Validation methods for calibrating software effort models

  • Authors:
  • Tim Menzies; Dan Port; Zhihao Chen; Jairus Hihn; Sherry Stukes

  • Affiliations:
  • Portland State University; University of Hawaii, Manoa; University of Southern California; Jet Propulsion Laboratory, Pasadena, CA; Jet Propulsion Laboratory, Pasadena, CA

  • Venue:
  • Proceedings of the 27th International Conference on Software Engineering
  • Year:
  • 2005

Abstract

COCONUT calibrates effort estimation models using an exhaustive search over the space of calibration parameters in a COCOMO I model. This technique is much simpler than other effort estimation methods yet yields PRED levels comparable to those other methods. Also, it does so with less project data and fewer attributes (no scale factors). However, a comparison between COCONUT and other methods is complicated by differences in the experimental methods used for effort estimation. A review of those experimental methods concludes that software effort estimation models should be calibrated to local data using incremental holdout (not jackknife) studies, combined with randomization and hypothesis testing, repeated a statistically significant number of times.
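
The abstract does not spell out COCONUT's procedure, but the ideas it names (an exhaustive search over COCOMO I calibration parameters, scored by a PRED measure, and evaluated with repeated, randomized incremental holdouts) can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the parameter grids, the PRED(30) target, and the project layout as (kloc, effort_multiplier, actual_effort) tuples are assumptions made for the example.

```python
import random

def cocomo_effort(a, b, kloc, em=1.0):
    """Basic COCOMO I form: effort = a * KLOC^b * (product of effort multipliers)."""
    return a * (kloc ** b) * em

def pred(estimates, actuals, n=30):
    """PRED(N): fraction of projects whose estimate falls within N% of the actual effort."""
    within = sum(1 for est, act in zip(estimates, actuals)
                 if abs(est - act) / act <= n / 100.0)
    return within / len(actuals)

def calibrate(train, a_grid, b_grid):
    """Exhaustive grid search for the (a, b) pair that maximizes PRED(30) on the training projects."""
    best = None
    for a in a_grid:
        for b in b_grid:
            ests = [cocomo_effort(a, b, kloc, em) for kloc, em, _ in train]
            acts = [act for _, _, act in train]
            score = pred(ests, acts)
            if best is None or score > best[0]:
                best = (score, a, b)
    return best[1], best[2]

def incremental_holdout(projects, a_grid, b_grid, trials=30, seed=0):
    """For each training-set size t: repeatedly shuffle the projects, calibrate on the
    first t, and score PRED(30) on the remaining holdout projects; report the mean."""
    rng = random.Random(seed)
    results = {}
    for t in range(2, len(projects) - 1):
        scores = []
        for _ in range(trials):
            shuffled = projects[:]
            rng.shuffle(shuffled)
            train, holdout = shuffled[:t], shuffled[t:]
            a, b = calibrate(train, a_grid, b_grid)
            ests = [cocomo_effort(a, b, kloc, em) for kloc, em, _ in holdout]
            acts = [act for _, _, act in holdout]
            scores.append(pred(ests, acts))
        results[t] = sum(scores) / len(scores)
    return results
```

Run with, say, a_grid = [0.1 * i for i in range(10, 101)] and b_grid = [0.01 * j for j in range(90, 131)]; incremental_holdout then reports how mean PRED(30) on held-out projects changes as the training set grows, which is the kind of evidence the review calls for when judging how much local data a calibration needs.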