Identifying the optimal subset of regressors in a regression bagging ensemble is a difficult task whose cost grows exponentially with the size of the ensemble. In this article we analyze two approximate techniques devised specifically to address this problem. The first strategy constructs a relaxed version of the problem that can be solved using semidefinite programming (SDP). The second is based on modifying the order of aggregation of the regressors. Ordered aggregation is a simple forward-selection algorithm that incorporates at each step the regressor that most reduces the training error of the current subensemble. Both techniques identify subensembles that are close to the optimal ones, which can be obtained only by exhaustive search at a much larger computational cost. Experiments on a wide variety of synthetic and real-world regression problems show that pruned ensembles composed of only 20% of the initial regressors often generalize better than the original bagging ensembles. These improvements stem from a reduction in the bias and covariance components of the generalization error. Subensembles obtained using either SDP or ordered aggregation generally outperform subensembles obtained by other ensemble pruning methods, as well as ensembles generated by the AdaBoost.R2 algorithm, negative correlation learning, or regularized linear stacked generalization. Ordered aggregation performs slightly better overall than SDP in the problems investigated, although the difference is not statistically significant. Ordered aggregation has the further advantage that it produces a nested sequence of near-optimal subensembles of increasing size at no additional computational cost.
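The greedy ordering procedure described above is easy to make concrete. Below is a minimal sketch of ordered aggregation in Python, assuming the training-set predictions of every regressor in the bagging ensemble have already been computed and stored in a matrix; the function name and array layout are illustrative, not taken from the paper.

    import numpy as np

    def ordered_aggregation(predictions, y):
        # predictions: array of shape (n_regressors, n_samples) holding each
        # regressor's predictions on the training set.
        # y: array of shape (n_samples,) with the training targets.
        # Returns the regressor indices in order of incorporation; the first
        # k indices form the subensemble of size k.
        n_regressors, n_samples = predictions.shape
        remaining = set(range(n_regressors))
        order = []
        running_sum = np.zeros(n_samples)  # sum of predictions of the selected regressors

        while remaining:
            k = len(order) + 1  # size of the candidate subensemble
            # Greedy step: incorporate the regressor whose inclusion yields
            # the lowest training MSE for the averaged subensemble.
            best = min(
                remaining,
                key=lambda i: np.mean(((running_sum + predictions[i]) / k - y) ** 2),
            )
            order.append(best)
            remaining.remove(best)
            running_sum += predictions[best]

        return order

Pruning the ensemble to 20% of its original size then amounts to averaging the regressors at the first 20% of positions in the returned order. Because the subensembles are nested, every other pruning level is available from the same run at no extra cost, which is the practical advantage noted at the end of the abstract.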