Background: The use of machine learning approaches for software effort estimation (SEE) has been studied for more than a decade. Most studies compare different learning machines on a number of data sets. However, most learning machines have more than one parameter that needs to be tuned, and it is unknown to what extent parameter settings affect their performance in SEE. Many works seem to make the implicit assumption that parameter settings would not change the outcomes significantly. Aims: To investigate to what extent parameter settings affect the performance of learning machines in SEE, and which learning machines are more sensitive to their parameters. Method: Considering an online learning scenario where learning machines are updated with new projects as they become available, systematic experiments were performed using five learning machines under several different parameter settings on three data sets. Results: While some learning machines, such as bagging using regression trees, were not very sensitive to parameter settings, others, such as multilayer perceptrons, were affected dramatically. Combining learning machines into bagging ensembles helped make them more robust against different parameter settings. The average performance of k-NN across projects was not much affected by different parameter settings, but the settings that achieved the best average performance across time steps were less consistently the best throughout the time steps than in the other approaches. Conclusions: Learning machines that are more and less sensitive to parameter settings were identified. The different sensitivities obtained by different learning machines show that sensitivity to parameters should be considered one of the criteria for evaluating SEE approaches.
A good learning machine for SEE is not only one that achieves superior performance, but also one that either depends little on parameter settings or for which good parameter choices are easy to make.
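The online evaluation and sensitivity analysis described above can be sketched in miniature. The following is not the paper's experiment: it uses a hypothetical synthetic data set and only one learner (k-NN on a single size feature), but it illustrates the two ideas in the Method and Results sections: predicting each new project from the projects seen so far, and quantifying parameter sensitivity as the spread of error across settings.

```python
def knn_predict(history, size, k):
    """Predict effort as the mean effort of the k past projects
    whose size is closest to the new project's size."""
    nearest = sorted(history, key=lambda p: abs(p[0] - size))[:k]
    return sum(p[1] for p in nearest) / len(nearest)

def online_mae(projects, k, warm=5):
    """Online scenario: each incoming project is predicted from all
    earlier projects, then added to the training set. Returns the
    mean absolute error over the stream."""
    history = list(projects[:warm])  # small warm-up window
    errors = []
    for size, effort in projects[warm:]:
        errors.append(abs(knn_predict(history, size, k) - effort))
        history.append((size, effort))
    return sum(errors) / len(errors)

# Hypothetical (size, effort) pairs standing in for a SEE data set:
# effort grows with size plus a small deterministic perturbation.
data = [(s, 2.0 * s + (s % 7)) for s in range(10, 60)]

# Evaluate several parameter settings (values of k) and measure
# sensitivity as the spread of MAE across settings.
maes = {k: online_mae(data, k) for k in (1, 2, 3, 5)}
sensitivity = max(maes.values()) - min(maes.values())
```

A learner for which `sensitivity` is small relative to its best MAE would count as robust to parameter settings in the sense discussed above; the paper's finding is that this spread differs dramatically between learners such as bagged regression trees and multilayer perceptrons.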