Machine Learning
A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, special issue: 26th Annual ACM Symposium on the Theory of Computing (STOC '94), May 23–25, 1994, and Second Annual European Conference on Computational Learning Theory (EuroCOLT '95), March 13–15, 1995.
The Random Subspace Method for Constructing Decision Forests. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Using Iterated Bagging to Debias Regressions. Machine Learning.
Improving Regressors using Boosting Techniques. ICML '97: Proceedings of the Fourteenth International Conference on Machine Learning.
Combining Pattern Classifiers: Methods and Algorithms
Experiments with AdaBoost.RT, an improved boosting scheme for regression. Neural Computation.
Managing Diversity in Regression Ensembles. The Journal of Machine Learning Research.
Data Mining: Practical Machine Learning Tools and Techniques, Second Edition (Morgan Kaufmann Series in Data Management Systems)
Statistical Comparisons of Classifiers over Multiple Data Sets. The Journal of Machine Learning Research.
Combining bias and variance reduction techniques for regression trees. ECML '05: Proceedings of the 16th European Conference on Machine Learning.
This work presents an experimental study of ensemble methods for regression, using Multilayer Perceptrons (MLPs) as the base method on 61 datasets. The ensemble methods considered are Randomization, Random Subspaces, Bagging, Iterated Bagging and AdaBoost.R2. Surprisingly, and in contradiction to previous studies, the best overall results are obtained with Bagging. A likely cause of this difference is the base method: MLPs instead of regression or model trees. Diversity-error diagrams are used to analyze the behaviour of the ensemble methods. Compared to Bagging, the additional diversity obtained with the other methods does not compensate for the increase in the errors of the ensemble members.
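As a rough illustration of the setup described above (not the authors' experimental code), the following sketch shows Bagging with MLP base regressors in scikit-learn, followed by the ambiguity decomposition that underlies diversity-error analysis. The dataset (load_diabetes), hidden-layer size, and ensemble size are placeholder choices, not taken from the paper.

# A minimal sketch, assuming scikit-learn; dataset, network size and
# ensemble size are illustrative, not the paper's actual settings.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base method: a small Multilayer Perceptron regressor.
base_mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)

# Bagging: each member is trained on a bootstrap sample of the training set.
# (The parameter is named base_estimator in scikit-learn versions before 1.2.)
ensemble = BaggingRegressor(estimator=base_mlp, n_estimators=25, random_state=0)
ensemble.fit(X_train, y_train)

# Diversity-error view for an averaging ensemble (ambiguity decomposition):
# ensemble MSE = average member MSE - diversity, so extra diversity only
# helps if it is not bought with a larger increase in member errors.
member_preds = np.stack([est.predict(X_test) for est in ensemble.estimators_])
ensemble_pred = member_preds.mean(axis=0)
avg_member_mse = np.mean((member_preds - y_test) ** 2)
diversity = np.mean((member_preds - ensemble_pred) ** 2)
ensemble_mse = np.mean((ensemble_pred - y_test) ** 2)
print(f"avg member MSE {avg_member_mse:.2f} - diversity {diversity:.2f} "
      f"= ensemble MSE {ensemble_mse:.2f}")

Swapping the bagged members for ones trained with Random Subspaces or AdaBoost.R2-style reweighting would typically raise the diversity term, but, per the paper's findings with MLPs, at the cost of a larger rise in the average member error.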