COCONUT calibrates effort estimation models using an exhaustive search over the space of calibration parameters in a COCOMO I model. This technique is much simpler than other effort estimation methods, yet it yields PRED levels comparable to those methods, and it does so with less project data and fewer attributes (no scale factors). However, comparing COCONUT against other methods is complicated by differences in the experimental methods used for effort estimation. A review of those experimental methods concludes that software effort estimation models should be calibrated to local data using incremental holdout (not jackknife) studies, combined with randomization and hypothesis testing, repeated enough times to support statistically significant conclusions.
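The procedure above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the basic COCOMO I form effort = a * KLOC^b, a PRED(30) accuracy measure, an exhaustive grid search over (a, b), and a simple incremental-holdout loop with shuffling; all function names and parameter ranges are illustrative.

```python
import random

def pred30(model, projects):
    """Fraction of projects whose magnitude of relative error is <= 0.30."""
    a, b = model
    hits = 0
    for kloc, actual in projects:
        estimate = a * kloc ** b
        if abs(estimate - actual) / actual <= 0.30:
            hits += 1
    return hits / len(projects)

def coconut(projects, a_range, b_range):
    """COCONUT-style calibration: exhaustive search over (a, b),
    keeping the pair that maximizes PRED(30) on the training data."""
    return max(((a, b) for a in a_range for b in b_range),
               key=lambda m: pred30(m, projects))

def incremental_holdout(projects, train_size, repeats=20, seed=1):
    """Shuffle, train on a prefix, test on the remainder; repeat
    many times and report the mean out-of-sample PRED(30)."""
    data = list(projects)
    rng = random.Random(seed)
    scores = []
    for _ in range(repeats):
        rng.shuffle(data)
        train, test = data[:train_size], data[train_size:]
        model = coconut(train,
                        a_range=[round(2.0 + 0.1 * i, 1) for i in range(11)],
                        b_range=[round(0.90 + 0.01 * i, 2) for i in range(21)])
        scores.append(pred30(model, test))
    return sum(scores) / len(scores)
```

A real study would add the randomization and hypothesis-testing step (e.g. comparing the distributions of holdout scores from two estimators), which is omitted here for brevity.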