There is an open challenge in the area of model-driven requirements engineering. Model transformations that allow deriving (platform-independent) conceptual models from (computation-independent) requirements models are being proposed. However, rigorous assessments of the quality of the resulting conceptual models are needed. This paper reports a controlled experiment that compares the performance of subjects applying two different techniques for deriving object-oriented, UML-compliant conceptual models. We compare the quality of the OO-Method conceptual models obtained by applying a text-based derivation technique (which mimics what OO-Method practitioners actually do in real projects) with the quality obtained by applying a novel communication-based derivation technique (which takes as input Communication Analysis requirements models). The results show an interaction between the derivation technique and the OO-Method modelling competence of the subject: the derivation technique has a significant impact on model completeness within the high-competence group. No impact was observed on model validity. We also discuss new challenges raised by the evaluation.