
A simulation study of the model evaluation criterion MMRE

Authors: T. Foss (Norwegian School of Management, Sandvika, Norway), E. Stensrud, B. Kitchenham, I. Myrtveit

The mean magnitude of relative error, MMRE, is probably the most widely used evaluation criterion for assessing the performance of competing software prediction models. One purpose of MMRE is to help select the best model. In this paper, we have performed a simulation study demonstrating that MMRE does not always select the best model. Our findings cast some doubt on the conclusions of any study of competing software prediction models that uses MMRE as a basis of model comparison. We therefore recommend not using MMRE to evaluate and compare prediction models. At present, we do not have a universal replacement for MMRE. In the meantime, we recommend using a combination of theoretical justification of the proposed models together with the other metrics proposed in this paper.
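For context, MMRE is the mean over a dataset of the magnitude of relative error, MRE_i = |actual_i - predicted_i| / actual_i. A minimal Python sketch of the conventional computation (the function name and sample values are illustrative, not taken from the paper):

    def mmre(actuals, predictions):
        # Mean Magnitude of Relative Error: average of |actual - predicted| / actual.
        errors = [abs(a - p) / a for a, p in zip(actuals, predictions)]
        return sum(errors) / len(errors)

    # Hypothetical effort data (e.g., person-hours) and two competing models' predictions.
    actuals = [100.0, 250.0, 40.0]
    model_a = [110.0, 230.0, 50.0]
    model_b = [90.0, 260.0, 40.0]
    print(mmre(actuals, model_a))  # ~0.143
    print(mmre(actuals, model_b))  # ~0.047

Under this criterion, model_b would be preferred; the paper's simulation results show that such a ranking need not identify the genuinely better model.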

Published in: IEEE Transactions on Software Engineering (Volume 29, Issue 11)