Ensembles of artificial neural networks show improved generalization capabilities that outperform those of single networks. However, for aggregation to be effective, the individual networks must be as accurate and as diverse as possible. A key problem, then, is how to tune the aggregate members so as to reach an optimal compromise between these two conflicting requirements. We present here an extensive evaluation of several algorithms for ensemble construction, including new proposals, and compare them with standard methods from the literature. We also discuss a potential problem with sequential aggregation algorithms: the infrequent but damaging selection, through their heuristics, of particularly bad ensemble members. We introduce modified algorithms that cope with this problem by allowing individual weighting of aggregate members. Our algorithms and their weighted modifications compare favorably against other methods in the literature, producing an appreciable improvement in performance on most of the standard statistical databases used as benchmarks.
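The idea of individually weighting aggregate members so that an occasionally selected bad member cannot dominate the aggregate can be illustrated with a minimal sketch. This is not the paper's exact algorithm; the inverse-validation-error weighting scheme and the function names below are illustrative assumptions.

```python
# Illustrative sketch (assumed scheme, not the paper's algorithm):
# weight each ensemble member inversely to its validation error, so a
# particularly bad member contributes little instead of spoiling a
# plain average of the members' outputs.

def member_weights(val_errors, eps=1e-12):
    """Normalized weights, inversely proportional to validation error."""
    inv = [1.0 / (e + eps) for e in val_errors]
    total = sum(inv)
    return [w / total for w in inv]

def weighted_prediction(member_preds, weights):
    """Weighted average of the members' predictions for one input."""
    return sum(w * p for w, p in zip(weights, member_preds))

# Toy example: three members; the third performed poorly on validation
# data, so its (wild) prediction is strongly down-weighted.
weights = member_weights([0.10, 0.12, 2.0])
print(weighted_prediction([1.0, 1.1, 5.0], weights))
```

With a plain (unweighted) average the third member would pull the prediction to about 2.37; the weighted aggregate stays close to the two accurate members instead.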