Background: Size features such as lines of code and function points are deemed essential for effort estimation, yet the conditions under which size features are actually a "must" are rarely questioned. Aim: To question the need for size features and to propose a method that compensates for their absence. Method: A baseline analogy-based estimation method (1NN) and a state-of-the-art learner (CART) are run on reduced (no size features) and full (all features) versions of 13 software effort estimation (SEE) data sets. 1NN is augmented with a popularity-based pre-processor to create "pop1NN". The performance of pop1NN is compared to that of 1NN and CART using 10-way cross-validation with respect to MMRE, MdMRE, MAR, PRED(25), MBRE, MIBRE, and MMER. Results: Without any pre-processor, removing size features degrades the performance of both 1NN and CART. For 11 of the 13 data sets, pop1NN removes the need for size features, and pop1NN on the reduced data performs comparably to CART on the full data. Conclusion: Size features are important and their use is endorsed. However, when there are insufficient means to collect software size metrics, methods like pop1NN may compensate for their absence with only a small loss in estimation accuracy.
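The pipeline described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: it assumes Euclidean-distance 1NN over already-normalized feature vectors, and the `popularity_filter` rule (keep projects that are the nearest neighbor of at least one other project) is a guess at the spirit of pop1NN's popularity-based pre-processor, not its exact rule. The metric helpers `mmre` and `pred25` follow the standard definitions of those two evaluation measures.

```python
import math

def nearest_neighbor_effort(train, query):
    """1NN analogy-based estimate: reuse the effort of the closest
    historical project. `train` is a list of (features, effort) pairs."""
    features, effort = min(train, key=lambda p: math.dist(p[0], query))
    return effort

def popularity_filter(train, min_count=1):
    """Illustrative popularity-based pre-processor (an assumed rule, not
    the paper's): keep only training projects that are the nearest
    neighbor of at least `min_count` other training projects."""
    counts = [0] * len(train)
    for i, (x, _) in enumerate(train):
        nn_j = min((j for j in range(len(train)) if j != i),
                   key=lambda j: math.dist(train[j][0], x))
        counts[nn_j] += 1
    return [p for p, c in zip(train, counts) if c >= min_count]

def mmre(actuals, predictions):
    """Mean Magnitude of Relative Error: mean(|actual - predicted| / actual)."""
    return sum(abs(a - p) / a for a, p in zip(actuals, predictions)) / len(actuals)

def pred25(actuals, predictions):
    """PRED(25): fraction of estimates within 25% of the actual effort."""
    return sum(abs(a - p) / a <= 0.25
               for a, p in zip(actuals, predictions)) / len(actuals)
```

For example, with `train = [([1.0, 1.0], 10.0), ([5.0, 5.0], 50.0), ([9.0, 9.0], 90.0)]`, `nearest_neighbor_effort(train, [1.2, 0.9])` returns `10.0`, the effort of the closest project. In the study's setup, the same estimator would be run once on the full feature vectors and once on vectors with the size features removed, comparing the resulting MMRE and PRED(25) scores.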