Finding conclusion stability for selecting the best effort predictor in software effort estimation
Automated Software Engineering
There exists a large and growing number of proposed estimation methods, but little conclusive evidence ranking one method over another. Prior effort estimation studies suffered from "conclusion instability", where the rankings of different methods were not stable across (a) different evaluation criteria; (b) different data sources; or (c) different random selections of that data. This paper reports a study of 158 effort estimation methods on data sets based on COCOMO features. Four "best" methods were detected that were consistently better than the "rest", i.e., the remaining 154 methods. These rankings of "best" and "rest" methods were stable across (a) three different evaluation criteria applied to (b) multiple data sets from two different sources that were (c) divided into hundreds of randomly selected subsets using four different random seeds. Hence, while there exists no single universal "best" effort estimation method, there appears to exist a small number (four) of most useful methods. This result both complicates and simplifies effort estimation research. The complication is that any future effort estimation analysis should be preceded by a "selection study" that finds the best local estimator. The simplification is that such a study need not be labor intensive, at least for COCOMO-style data sets.
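The stability protocol described above can be sketched in a few lines: score each method on many randomly drawn subsets of the data, repeat with several seeds, and check whether the same method tops the ranking every time. This is a minimal illustration only; the estimator names, the toy data, and the use of median magnitude of relative error (MRE) as the single evaluation criterion are assumptions for the sketch, not the paper's actual experimental setup.

```python
import random
import statistics

def mre(actual, predicted):
    """Magnitude of relative error, one common evaluation criterion."""
    return abs(actual - predicted) / actual

def rank_methods(methods, projects, n_subsets=100, seeds=(1, 2, 3, 4)):
    """Rank estimation methods by median MRE over many random subsets,
    repeating with several seeds to probe conclusion stability."""
    rankings = []
    for seed in seeds:
        rng = random.Random(seed)
        scores = {name: [] for name in methods}
        for _ in range(n_subsets):
            # Draw a random half of the projects for this trial.
            subset = rng.sample(projects, k=max(2, len(projects) // 2))
            for name, estimate in methods.items():
                errs = [mre(p["effort"], estimate(p)) for p in subset]
                scores[name].append(statistics.median(errs))
        # Lower median MRE is better.
        rankings.append(sorted(scores, key=lambda n: statistics.median(scores[n])))
    return rankings

# Toy data and two hypothetical estimators (names are illustrative only).
projects = [{"loc": loc, "effort": 2.5 * loc ** 1.05} for loc in range(5, 55, 5)]
methods = {
    "linear": lambda p: 3.0 * p["loc"],
    "cocomo-like": lambda p: 2.5 * p["loc"] ** 1.05,
}
rankings = rank_methods(methods, projects)
# A stable conclusion: the same method tops the ranking under every seed.
print(all(r[0] == "cocomo-like" for r in rankings))
```

A fuller version of this check would also vary the evaluation criterion (e.g., swap MRE for other error measures) and the data source, as the study does, before declaring any method "best".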