Ensembles of artificial neural networks show improved generalization capabilities, outperforming single networks. However, for aggregation to be effective, the individual networks must be as accurate and as diverse as possible. An important problem, then, is how to tune the ensemble members to achieve an optimal compromise between these two conflicting conditions. We present an extensive evaluation of several algorithms for ensemble construction, including new proposals, and compare them with standard methods from the literature. We also discuss a potential problem with sequential aggregation algorithms: the infrequent but damaging selection, through their heuristics, of particularly bad ensemble members. We introduce modified algorithms that cope with this problem by allowing individual weighting of ensemble members. Our algorithms and their weighted modifications compare favorably with other methods in the literature, producing an appreciable improvement in performance on most of the standard statistical datasets used as benchmarks.
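The ideas in the abstract can be illustrated with a minimal sketch. The code below is an assumption, not the authors' actual algorithm: it implements a generic greedy sequential aggregation scheme (forward selection of candidate regressors on a validation set) plus the weighted variant the abstract motivates, where least-squares weights can down-weight a badly chosen member instead of averaging it in with full weight. The function name `greedy_ensemble` and its interface are hypothetical.

```python
import numpy as np

def greedy_ensemble(preds, y, weighted=False):
    """Greedy sequential selection of ensemble members (illustrative sketch).

    preds:    (n_models, n_samples) validation predictions of the candidates
    y:        (n_samples,) validation targets
    weighted: if True, fit individual least-squares weights for the selected
              members instead of using a plain average
    Returns (selected_indices, weights).
    """
    n_models = preds.shape[0]
    selected, remaining = [], list(range(n_models))
    best_err = np.inf
    while remaining:
        # Try adding each remaining candidate; keep the one that lowers
        # the ensemble's validation MSE the most.
        errs = [np.mean((np.mean(preds[selected + [j]], axis=0) - y) ** 2)
                for j in remaining]
        if min(errs) >= best_err:
            break  # no candidate improves the ensemble; stop the sequence
        best_err = min(errs)
        j_best = remaining[int(np.argmin(errs))]
        selected.append(j_best)
        remaining.remove(j_best)
    if weighted:
        # Individual weighting: least squares can assign a small weight to
        # a poorly chosen member, limiting the damage the abstract describes.
        w, *_ = np.linalg.lstsq(preds[selected].T, y, rcond=None)
    else:
        w = np.full(len(selected), 1.0 / len(selected))
    return selected, w
```

In this sketch the unweighted variant corresponds to plain aggregation by averaging, while `weighted=True` shows one simple way to realize "individual weighting of ensemble members"; the paper's own heuristics and weighting schemes may differ.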