ACM Transactions on Software Engineering and Methodology (TOSEM)
Ensembles of learning machines are promising for software effort estimation (SEE), but they need to be tailored to this task for their potential to be exploited. A key issue when creating ensembles is producing base models that are both diverse and accurate. If different performance measures behave differently for SEE, they could serve as a natural source of diversity for creating SEE ensembles. We propose viewing SEE model creation as a multiobjective learning problem: a multiobjective evolutionary algorithm (MOEA) creates SEE models by optimising several performance measures simultaneously, revealing the tradeoffs among these measures. We show that the performance measures behave very differently, sometimes even presenting opposite trends, and can therefore be used as a source of diversity for creating SEE ensembles. A good tradeoff among the measures can be obtained with an ensemble of MOEA solutions, which performs similarly to or better than a model that does not consider these measures explicitly. The MOEA approach is also flexible, allowing a particular measure to be emphasised if desired. In conclusion, MOEAs can be used to better understand the relationships among performance measures and are very effective for creating SEE models.
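The core idea above — treating each performance measure as a separate objective, keeping the nondominated (Pareto-optimal) models, and combining them into an ensemble — can be illustrated with a minimal sketch. Everything here is a simplifying assumption: the toy dataset, the linear effort model, and the use of plain random search in place of an actual MOEA such as NSGA-II. Only the two competing measures, MMRE and PRED(25), are standard SEE measures.

```python
import random

# Hypothetical toy projects: (size in KLOC, actual effort in person-months).
# Purely illustrative, not taken from any real SEE dataset.
DATA = [(2, 5), (5, 11), (10, 24), (20, 52), (40, 95), (80, 210)]

def predict(model, size):
    """A simple linear effort model: effort = a + b * size."""
    a, b = model
    return a + b * size

def mmre(model):
    """Mean Magnitude of Relative Error (to be minimised)."""
    return sum(abs(e - predict(model, s)) / e for s, e in DATA) / len(DATA)

def pred25_loss(model):
    """1 - PRED(25): fraction of estimates NOT within 25% of actual effort."""
    hits = sum(abs(e - predict(model, s)) / e <= 0.25 for s, e in DATA)
    return 1.0 - hits / len(DATA)

def dominates(f, g):
    """Pareto dominance for minimisation: f is no worse in every objective
    and strictly better in at least one."""
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

random.seed(0)
# Random search stands in for the MOEA's evolutionary loop.
candidates = [(random.uniform(-5, 5), random.uniform(0, 5)) for _ in range(2000)]
scored = [((mmre(m), pred25_loss(m)), m) for m in candidates]

# The nondominated models approximate the Pareto front; each represents a
# different tradeoff between the two measures, giving a diverse model pool.
pareto = [m for f, m in scored if not any(dominates(g, f) for g, _ in scored)]

def ensemble_predict(size):
    """Ensemble of Pareto-front models: average their effort estimates."""
    return sum(predict(m, size) for m in pareto) / len(pareto)
```

Averaging the Pareto solutions is one simple combination rule; emphasising a single measure, as the abstract notes, amounts to instead picking the front member that is best on that measure.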