Over the last 25+ years, the software community has searched for the best models for estimating variables of interest (e.g., cost, defects, and fault proneness), yet little research has been done to improve the reliability of the estimates: scope error and error analysis have been largely ignored. This work attempts to fill that gap and to build a common understanding within the community. The results of this study can be used to support human judgment-based techniques and to extend the existing portfolio of estimation approaches. The novelty of this work is that we provide a way of detecting and handling the scope error arising from estimation models; knowing whether scope error will occur is a precondition for the safe use of an estimation model. We also provide a practical procedure for deciding whether to include outliers in the training set when building a new version of an estimation model. The majority of the work is empirical, applying computational intelligence techniques to several COCOMO model variations on a publicly available cost estimation data set from the PROMISE repository.
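The paper's own outlier-screening procedure is not reproduced here. As a purely hypothetical illustration of the general idea, the sketch below flags candidate outliers by their standardized residuals from a fitted log-linear (COCOMO-style) effort model; the function name, the 2.5-sigma threshold, and the model form are assumptions for the example, not the authors' method.

```python
import numpy as np

def flag_outliers(loc, effort, z_thresh=2.5):
    """Flag projects whose log-effort residual from a fitted
    log-linear (COCOMO-style) model effort = a * loc**b exceeds
    z_thresh standard deviations. Returns a boolean mask where
    True marks a candidate outlier to review before including
    the project in the training set."""
    x = np.log(np.asarray(loc, dtype=float))
    y = np.log(np.asarray(effort, dtype=float))
    # Fit log(effort) = log(a) + b * log(loc) by least squares.
    b, log_a = np.polyfit(x, y, 1)
    residuals = y - (log_a + b * x)
    # Standardize residuals and compare against the threshold.
    z = (residuals - residuals.mean()) / residuals.std()
    return np.abs(z) > z_thresh
```

A project flagged this way need not be discarded automatically; the decision of whether to keep it in the training set would still rest on inspection of the project's context.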