Over the last 25+ years, software estimation research has searched for the best model for estimating variables of interest (e.g., cost, defects, and fault proneness). This effort has not led to common agreement. One problem is the use of accuracy as the basis for model selection and comparison. Accuracy, however, is not invariant: it depends on the test sample, the error measure, and the chosen error statistics (e.g., MMRE, PRED, and the mean and standard deviation of error samples). Ideally, we would like an invariant criterion. In this paper, we show that uncertainty can be used as an invariant criterion to decide which estimation model should be preferred over others. The majority of this work is empirical, applying Bayesian prediction intervals to some COCOMO model variations on a publicly available cost estimation data set from the PROMISE repository.
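The non-invariance claim can be made concrete with a small sketch. The snippet below (a hypothetical illustration with fabricated numbers, not data from the paper) computes two of the error statistics named above, MMRE and PRED(25), for two made-up models on the same test sample; the two statistics rank the models in opposite orders, so "the more accurate model" depends on which statistic one chooses.

```python
def mre(actual, predicted):
    """Magnitude of relative error for a single estimate."""
    return abs(actual - predicted) / actual

def mmre(actuals, predicteds):
    """Mean magnitude of relative error over a test sample."""
    errors = [mre(a, p) for a, p in zip(actuals, predicteds)]
    return sum(errors) / len(errors)

def pred(actuals, predicteds, threshold=0.25):
    """PRED(25): fraction of estimates whose MRE is within 25%."""
    errors = [mre(a, p) for a, p in zip(actuals, predicteds)]
    return sum(1 for e in errors if e <= threshold) / len(errors)

# Fabricated actual efforts and two hypothetical models' predictions.
actual  = [100, 200, 50, 400]
model_a = [124, 252, 63.5, 504]  # consistently ~25% high
model_b = [190, 210, 48, 440]    # mostly tight, one large miss

# Model A has the lower (better) MMRE, yet Model B has the
# higher (better) PRED(25) -- the two statistics disagree.
print(mmre(actual, model_a), pred(actual, model_a))  # ~0.2575, 0.25
print(mmre(actual, model_b), pred(actual, model_b))  # ~0.2725, 0.75
```

This is exactly the instability the paper's invariant-criterion argument targets: neither statistic is wrong, but neither gives a sample-independent verdict.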