In a model-driven engineering development process aimed at guaranteeing that extra-functional concerns modeled at design level are preserved at platform execution level, automated code generation must produce artifacts that enable back-annotation activities. Once the target platform code has been generated, quality attributes of the system are evaluated by appropriate code execution monitoring and analysis tools, and the results are back-annotated to the source models for thorough evaluation. Only at this point can the preservation of the analysed extra-functional aspects be either asserted, or achieved by re-applying the code generation chain to the source models suitably optimized according to the evaluation results. In this work we provide a solution to the problem of automatically generating target platform code from source models, focusing on producing code artifacts that facilitate analysis and enable back-annotation activities. The challenges that arose and their solutions are described, together with the completed and planned implementation of the proposed approach.
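The back-annotation workflow described above can be sketched minimally as follows. This is an illustrative assumption, not the paper's implementation: generated artifacts are assumed to carry traceability links to the source-model elements they were generated from, and a monitoring tool is assumed to report one quality attribute (here, a hypothetical worst-case execution time) per artifact. Back-annotation then amounts to mapping each measurement through the traceability links onto the corresponding model element.

```python
# Minimal sketch of back-annotation via traceability links.
# All names and values below are hypothetical illustrations.

# Traceability map produced at code-generation time:
# generated artifact -> source-model element it was derived from.
trace_links = {
    "Task_send.c": "Model::Component::Sender",
    "Task_recv.c": "Model::Component::Receiver",
}

# Metrics gathered by a monitoring/analysis tool on the running
# platform code (hypothetical worst-case execution times, in ms).
measured_wcet_ms = {"Task_send.c": 4.2, "Task_recv.c": 7.9}

def back_annotate(trace, metrics):
    """Attach each measured value to the model element from which
    the monitored artifact was generated."""
    annotations = {}
    for artifact, value in metrics.items():
        element = trace[artifact]
        annotations[element] = {"wcet_ms": value}
    return annotations

annotations = back_annotate(trace_links, measured_wcet_ms)
print(annotations["Model::Component::Sender"]["wcet_ms"])  # 4.2
```

With the measurements attached to model elements, a designer can evaluate whether the modeled extra-functional constraints hold and, if not, optimize the source models and re-run the generation chain.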