Unlike conventional compilers for imperative programming languages such as C or Ada, model-based code generators lack established methods for safeguarding the artifacts they generate, despite progress in the field of formal verification. In engineering practice, several testing approaches dominate instead. This paper describes a general, tool-independent test architecture for code generators used in model-based development. We evaluate the effectiveness of our test approach by testing the optimizations performed by the TargetLink code generator, a widely accepted and complex development tool used in automotive model-based development.