API Knowledge Guided Test Generation for Machine Learning Libraries
Abstract
This thesis proposes MUTester, which generates test cases for the APIs of machine learning libraries by leveraging two kinds of API knowledge: API constraints mined from the corresponding API documentation, and API usage patterns mined from code fragments on Stack Overflow (SO). First, we propose a set of 18 linguistic rules for mining API constraints from API documents. Then, we use the frequent itemset mining technique to extract API usage patterns from a large corpus of machine-learning-related code fragments collected from SO. Finally, we use these two types of API knowledge to guide the test generation of existing test generators for machine learning libraries. To evaluate the performance of MUTester, we first collected 2,889 APIs from five widely used machine learning libraries (i.e., Scikit-learn, Pandas, NumPy, SciPy, and PyTorch); for each API, we then extracted its API knowledge, i.e., its API constraints and API usage patterns. Given an API, MUTester combines this API knowledge with existing test generators (e.g., the search-based test generator PyEvosuite and the random test generator PyRandoop) to generate test cases for the API. The results of our experiments show that MUTester significantly improves the corresponding test generation methods: the improvement in code coverage ranges from 18.0% to 41.9% on average, and MUTester also reduces the number of invalid tests produced by the existing test generators by 21%.
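To make the constraint-mining step concrete, below is a minimal sketch of what one such linguistic rule could look like. The rule pattern, the `mine_range_constraint` helper, and the example sentence are illustrative assumptions, not the thesis's actual rule set of 18 rules.

```python
import re

# A hypothetical linguistic rule in the spirit of MUTester's constraint
# mining: extract a value-range constraint of the form
# "must be between X and Y" from a parameter's docstring sentence.
RANGE_RULE = re.compile(
    r"(?:must be|should be|is)\s+between\s+"
    r"(-?\d+(?:\.\d+)?)\s+and\s+(-?\d+(?:\.\d+)?)"
)

def mine_range_constraint(param_name: str, sentence: str):
    """Return a (param, low, high) constraint if the sentence matches the rule."""
    match = RANGE_RULE.search(sentence)
    if match is None:
        return None
    return (param_name, float(match.group(1)), float(match.group(2)))

# Example: a sentence in the style of scikit-learn docstrings.
print(mine_range_constraint("alpha", "alpha must be between 0 and 1."))
# -> ('alpha', 0.0, 1.0)
```

A test generator seeded with such a constraint could then sample `alpha` from [0, 1] instead of the unconstrained float space, which is one plausible way mined constraints help cut down the invalid tests reported above.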
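The usage-pattern step can be illustrated in a similar spirit with a small, self-contained frequent itemset miner over the sets of API calls observed in SO snippets. This brute-force enumeration, along with the `mine_usage_patterns` helper, the toy corpus, and the support threshold, is an assumed simplification for illustration, not the thesis's actual implementation.

```python
from itertools import combinations
from collections import Counter

def mine_usage_patterns(snippets, min_support=0.5, max_size=2):
    """Mine frequent itemsets over the sets of API calls in each snippet.

    `snippets` is a list of sets of API names extracted from SO code
    fragments; itemsets appearing in at least `min_support` of the
    snippets are returned as usage patterns.
    """
    counts = Counter()
    for calls in snippets:
        for size in range(1, max_size + 1):
            for itemset in combinations(sorted(calls), size):
                counts[itemset] += 1
    threshold = min_support * len(snippets)
    return {itemset: n for itemset, n in counts.items() if n >= threshold}

# Toy corpus: API calls from three hypothetical SO snippets.
corpus = [
    {"StandardScaler.fit", "StandardScaler.transform"},
    {"StandardScaler.fit", "StandardScaler.transform", "numpy.array"},
    {"numpy.array", "numpy.reshape"},
]
print(mine_usage_patterns(corpus, min_support=0.6))
# fit/transform co-occur in 2 of 3 snippets, so they surface as a pattern
# that can seed realistic call sequences during test generation.
```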