Bagging ensemble selection (BES) is a relatively new ensemble learning strategy that can be viewed as an ensemble of the ensemble selection from libraries of models (ES) strategy: ES is run on multiple bootstrap replicates of the training data, and the selected sub-ensembles are combined. Previous experimental results on binary classification problems have shown that, with random trees as base classifiers, BES-OOB (the most successful BES variant) is competitive with, and in many cases superior to, other ensemble learning strategies such as the original ES algorithm, stacking with linear regression, random forests, and boosting. Motivated by these promising classification results, this paper examines the predictive performance of BES-OOB on regression problems. Our results show that BES-OOB outperforms Stochastic Gradient Boosting and Bagging when regression trees are used as the base learners. They also suggest that the advantage of using a diverse model library becomes clear when the library is relatively large. Finally, we present encouraging results indicating that the non-negative least squares (NNLS) algorithm is a viable approach for pruning an ensemble of ensembles.
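To make the strategy concrete, here is a minimal sketch of BES-OOB for regression in Python, under one plausible reading of the description above: each bag trains a library of randomized regression trees on a bootstrap sample, runs greedy forward ensemble selection on the out-of-bag (OOB) rows, and the final prediction averages the per-bag sub-ensembles. The parameter names (`n_bags`, `library_size`, `rounds`) and the use of scikit-learn are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of BES-OOB for regression; hyperparameters are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error


def forward_select(preds, y_val, rounds=25):
    """Greedy forward selection with replacement, minimizing validation RMSE.

    preds: array (n_models, n_val) of library predictions on the hillclimb set.
    Returns the (multi)set of selected model indices.
    """
    chosen, total = [], np.zeros(preds.shape[1])
    for step in range(1, rounds + 1):
        # Pick the model whose addition most improves the running average.
        errs = [mean_squared_error(y_val, (total + p) / step) for p in preds]
        best = int(np.argmin(errs))
        chosen.append(best)
        total += preds[best]
    return chosen


def bes_oob_predict(X, y, X_test, n_bags=10, library_size=50, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    test_preds = []
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)        # bootstrap sample for this bag
        oob = np.setdiff1d(np.arange(n), idx)   # held-out (OOB) rows
        # Randomized split selection gives a diverse library per bootstrap.
        library = [
            DecisionTreeRegressor(splitter="random",
                                  random_state=int(rng.integers(2**31)))
            .fit(X[idx], y[idx])
            for _ in range(library_size)
        ]
        oob_preds = np.array([m.predict(X[oob]) for m in library])
        chosen = forward_select(oob_preds, y[oob])
        # This bag's prediction: average over its selected sub-ensemble.
        test_preds.append(
            np.mean([library[i].predict(X_test) for i in chosen], axis=0))
    return np.mean(test_preds, axis=0)          # average across all bags
```

Selection with replacement, as in the original ES hill-climbing procedure, lets a strong model accumulate extra weight in the averaged sub-ensemble; the OOB rows serve as a free hillclimb set, which is what the "-OOB" in BES-OOB refers to.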
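The final claim, pruning an ensemble of ensembles with non-negative least squares, can be sketched in a few lines with SciPy: fit non-negative combination weights over each per-bag sub-ensemble's predictions on held-out (for example, out-of-bag) data, then discard ensembles whose weight is near zero. The matrix layout, the function name `nnls_prune`, and the tolerance below are assumptions for illustration.

```python
# Hedged sketch of NNLS-based pruning of an ensemble of ensembles.
# `val_preds` holds each per-bag sub-ensemble's predictions on a held-out
# set; how that set is built (e.g., from OOB data) is assumed, not prescribed.
import numpy as np
from scipy.optimize import nnls


def nnls_prune(val_preds, y_val, tol=1e-6):
    """val_preds: (n_ensembles, n_val) held-out predictions per ensemble.
    Returns indices of the retained ensembles and their normalized weights."""
    w, _ = nnls(val_preds.T, y_val)   # minimize ||P.T @ w - y||_2 with w >= 0
    keep = np.flatnonzero(w > tol)    # NNLS solutions are typically sparse,
    if keep.size == 0:                # so tiny weights mark prunable bags;
        n = val_preds.shape[0]        # on a degenerate fit, fall back to
        return np.arange(n), np.full(n, 1.0 / n)  # uniform weighting
    return keep, w[keep] / w[keep].sum()
```

The appeal of this approach is that the non-negativity constraint tends to drive many weights exactly to zero, so pruning and weighting fall out of a single convex fit rather than a separate search.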