A Simulation Study of the Model Evaluation Criterion MMRE

  • Authors:
  • Tron Foss; Erik Stensrud; Barbara Kitchenham; Ingunn Myrtveit

  • Venue:
  • IEEE Transactions on Software Engineering
  • Year:
  • 2003

Abstract

The Mean Magnitude of Relative Error, MMRE, is probably the most widely used evaluation criterion for assessing the performance of competing software prediction models. One purpose of MMRE is to assist us in selecting the best model. In this paper, we have performed a simulation study demonstrating that MMRE does not always select the best model. Our findings cast some doubt on the conclusions of any study of competing software prediction models that used MMRE as the basis for model comparison. We therefore recommend against using MMRE to evaluate and compare prediction models. At present, there is no universal replacement for MMRE. In the meantime, we recommend combining a theoretical justification of the proposed models with the other metrics proposed in this paper.
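
For reference, MMRE is conventionally defined as the mean of the magnitude of relative errors, MRE_i = |actual_i − predicted_i| / actual_i, taken over n predictions. The sketch below is a minimal illustration of this standard definition using hypothetical effort data and two made-up models; it is not the authors' simulation setup.

```python
def mmre(actuals, predictions):
    """Mean Magnitude of Relative Error: mean of |actual - predicted| / actual."""
    assert len(actuals) == len(predictions) and all(a > 0 for a in actuals)
    return sum(abs(a - p) / a for a, p in zip(actuals, predictions)) / len(actuals)

# Hypothetical effort values (e.g., person-hours) and two competing models' predictions.
actual  = [120, 340, 80, 560]
model_a = [100, 300, 95, 500]   # tends to underestimate
model_b = [130, 360, 70, 600]   # tends to overestimate

print(f"MMRE model A: {mmre(actual, model_a):.3f}")
print(f"MMRE model B: {mmre(actual, model_b):.3f}")
```

The paper's point is that ranking models by this quantity does not necessarily identify the model that best fits the data, which is why the abstract cautions against relying on it alone.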