We address the problem of rigorous testing of program generators. Program generators are programs that take as input a model in some modeling language and produce as output a program that captures the execution semantics of the input model. In this sense, program generators are themselves programs, and at first sight the traditional techniques for testing programs ought to apply to them as well. However, the rich semantic structure of the inputs and outputs of program generators poses unique challenges that have so far not been addressed sufficiently in the testing literature. We present a novel automatic test-case generation method for testing program generators. It is based on both the syntax and the semantics of the modeling language, and can uncover subtle semantic errors in the program generator. We demonstrate our method on flex, a prototypical lexical analyzer generator.
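To make the idea of syntax-driven test-case generation concrete, the sketch below derives random test inputs from the grammar of a tiny regex-like modeling language. This is an illustrative assumption, not the paper's actual algorithm: the grammar, the `generate` function, and the depth cutoff are all hypothetical, and the semantic side of the method (checking the generated program's behavior) is omitted.

```python
import random

# Hypothetical toy grammar for a tiny regex-like modeling language,
# standing in for the input language of a generator such as flex.
GRAMMAR = {
    "regex":  [["term"], ["term", "|", "regex"]],
    "term":   [["factor"], ["factor", "term"]],
    "factor": [["atom"], ["atom", "*"]],
    "atom":   [["a"], ["b"], ["(", "regex", ")"]],
}

def generate(symbol="regex", depth=0, max_depth=6, rng=random):
    """Expand `symbol` by choosing productions at random; once the
    depth limit is reached, take the shortest production so that the
    expansion is guaranteed to terminate."""
    if symbol not in GRAMMAR:
        return symbol  # terminal symbol: emit it as-is
    productions = GRAMMAR[symbol]
    chosen = (min(productions, key=len) if depth >= max_depth
              else rng.choice(productions))
    return "".join(generate(s, depth + 1, max_depth, rng) for s in chosen)

if __name__ == "__main__":
    rng = random.Random(0)
    # Each generated string is a syntactically valid model that could
    # be fed to the program generator under test.
    print([generate(rng=rng) for _ in range(5)])
```

Each string produced this way is a well-formed input model; a semantics-aware method, as the abstract suggests, would additionally compare the behavior of the generated program against the meaning of the model.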