Proceedings of the 2013 International Conference on Software Engineering
Background: Previous work showed that Multi-Objective Evolutionary Algorithms (MOEAs) can be used to train ensembles of learning machines for Software Effort Estimation (SEE) by optimising several performance measures concurrently. Optimisation based on three measures (LSD, MMRE and PRED(25)) was analysed and led to promising results on these and other measures.

Aims: (a) It is not known how well ensembles trained on other measures would behave for SEE, or whether training on certain measures would improve performance particularly on those measures. (b) It is also not known whether it is best to include all SEE models created by the MOEA in the ensemble, or only the models with the best training performance on each measure being optimised. Investigating (a) and (b) is the aim of this work.

Method: MOEAs were used to train ensembles by optimising four different sets of performance measures, involving nine different measures in total. The performance of all ensembles was then compared on all nine measures. Ensembles composed of different sets of models generated by the MOEAs were also compared.

Results: (a) Ensembles trained on LSD, MMRE and PRED(25) obtained the best results on most performance measures and were considered more successful than the others. Optimising certain performance measures did not necessarily lead to the best test performance on those particular measures, probably due to overfitting. (b) There was no inherent advantage in using ensembles composed of all the SEE models generated by the MOEA over using only the best SEE model according to each measure separately.

Conclusions: Care must be taken to prevent overfitting on the performance measures being optimised. Our results suggest that concurrently optimising LSD, MMRE and PRED(25) promoted more ensemble diversity than other combinations of measures, and hence performed best; low diversity is more likely to lead to overfitting.
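For reference, the three measures highlighted in the abstract have standard definitions in the SEE literature: MMRE is the mean magnitude of relative error, PRED(25) is the fraction of estimates whose relative error is at most 25%, and LSD is the logarithmic standard deviation as defined in Foss et al.'s simulation study of MMRE. A minimal Python sketch (function names are illustrative, not from the paper) could look like:

```python
import math

def mmre(actual, predicted):
    # Mean Magnitude of Relative Error: mean of |y - yhat| / y
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def pred(actual, predicted, level=0.25):
    # PRED(25): fraction of estimates with relative error <= 25%
    hits = sum(1 for a, p in zip(actual, predicted) if abs(a - p) / a <= level)
    return hits / len(actual)

def lsd(actual, predicted):
    # Logarithmic Standard Deviation (per Foss et al.):
    # e_i = ln(y_i) - ln(yhat_i); s2 is the sample variance of the e_i
    e = [math.log(a) - math.log(p) for a, p in zip(actual, predicted)]
    n = len(e)
    mean_e = sum(e) / n
    s2 = sum((x - mean_e) ** 2 for x in e) / (n - 1)
    return math.sqrt(sum((x + s2 / 2) ** 2 for x in e) / (n - 1))
```

Note that lower values are better for MMRE and LSD, while higher values are better for PRED(25), which is why a MOEA treats them as concurrent, potentially conflicting objectives.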