Abstract:
Researchers have proposed a number of automated techniques for testing refactoring engines. However, these techniques may have limitations related to the program generator, time consumption, the kinds of bugs detected, and debugging. We propose a technique to scale testing of refactoring engines. We improve the expressiveness of a program generator, skip some test inputs to improve performance, and propose new oracles that detect behavioral changes using change impact analysis, overly strong conditions using mutation testing, and transformation issues related to the refactoring definitions. We evaluated our technique on 24 refactoring implementations for Java (Eclipse and JRRT) and C (Eclipse) and found 119 bugs. The technique reduces testing time by 96% using skips while missing only 7% of the bugs. Using the new oracle to identify overly strong conditions, it detects 37% of new bugs while missing 16% of the bugs compared with a previous technique. Furthermore, the proposed oracle facilitates debugging by indicating the overly strong conditions.
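To illustrate the idea of flagging an overly strong condition, the sketch below shows one plausible differential-style check: if one engine rejects a refactoring that another engine applies while preserving behavior on a test suite, the rejecting precondition is reported as possibly overly strong. This is a minimal sketch for illustration only; the interfaces RefactoringEngine and TestSuite are hypothetical placeholders, not Eclipse or JRRT APIs, and the differential check shown here is an assumed realization rather than the paper's exact mutation-testing-based oracle.

import java.util.Optional;

interface RefactoringEngine {
    // Returns the transformed program, or empty if a precondition rejects the input.
    Optional<String> apply(String program, String refactoring);
}

interface TestSuite {
    // True when the original and transformed programs behave the same on the suite.
    boolean sameBehavior(String original, String transformed);
}

final class OverlyStrongConditionOracle {
    private final TestSuite tests;

    OverlyStrongConditionOracle(TestSuite tests) {
        this.tests = tests;
    }

    /** Flags engine A when it rejects an input that engine B transforms correctly. */
    boolean flagsOverlyStrongCondition(RefactoringEngine a,
                                       RefactoringEngine b,
                                       String program,
                                       String refactoring) {
        Optional<String> byA = a.apply(program, refactoring);
        if (byA.isPresent()) {
            return false; // A applied the refactoring; no rejecting condition to examine.
        }
        Optional<String> byB = b.apply(program, refactoring);
        // A rejected, but B produced a behavior-preserving transformation:
        // report A's rejecting condition as possibly overly strong.
        return byB.isPresent() && tests.sameBehavior(program, byB.get());
    }
}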
Published in: 2016 IEEE/ACM 38th International Conference on Software Engineering Companion (ICSE-C)
Date of Conference: 14-22 May 2016
Date Added to IEEE Xplore: 23 March 2017
Conference Location: Austin, TX, USA