Abstract:
Automated test case generation for RESTful web APIs is a thriving research topic due to their key role in software integration. Most approaches in this domain follow a black-box approach, where test cases are randomly derived from the API specification. These techniques show promising results, but they neglect constraints among input parameters (so-called inter-parameter dependencies), as these cannot be formally described in current API specification languages. As a result, when testing real-world services, most random test cases tend to be invalid since they violate some of the inter-parameter dependencies of the service, making human intervention indispensable. In this paper, we propose a deep learning-based approach for automatically predicting the validity of an API request (i.e., test input) before calling the actual API. The model is trained with the API requests and responses collected during the generation and execution of previous test cases. Preliminary results with five real-world RESTful APIs and 16K automatically generated test cases show that test input validity can be predicted with an accuracy ranging from 86% to 100% in APIs like Yelp, GitHub, and YouTube. These are encouraging results that show the potential of artificial intelligence to improve current test case generation techniques.
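The sketch below is a minimal, hypothetical illustration of the kind of validity classifier the abstract describes, not the authors' implementation. It assumes each API request has been encoded as a fixed-length numeric feature vector and labeled valid or invalid from the status code of a previously observed response; the encoding size, network architecture, and data are placeholders.

```python
# Hypothetical sketch: a binary classifier that predicts whether an API
# request (test input) is valid before the API is actually called.
# Feature shapes, labels, and training data are illustrative only.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Assume each request is encoded as a fixed-length numeric vector
# (e.g., one-hot encoded parameter names/values) and labeled 1 (valid)
# or 0 (invalid) based on the response observed when it was executed.
num_features = 64                        # illustrative encoding size
X = np.random.rand(1000, num_features)   # placeholder for encoded requests
y = np.random.randint(0, 2, size=1000)   # placeholder validity labels

model = keras.Sequential([
    layers.Input(shape=(num_features,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # probability the request is valid
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2, verbose=0)

# At test-generation time, only requests predicted valid would be executed,
# filtering out inputs likely to violate inter-parameter dependencies.
candidate = np.random.rand(1, num_features)
print("predicted validity:", float(model.predict(candidate, verbose=0)[0, 0]))
```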
Published in: 2021 IEEE/ACM Third International Workshop on Deep Learning for Testing and Testing for Deep Learning (DeepTest)
Date of Conference: 1 June 2021
Date Added to IEEE Xplore: 13 July 2021