Boosting is a family of methods for constructing classifier ensembles. Their distinguishing feature is that they obtain a strong classifier from a combination of weak classifiers, so boosting can be used with very simple base classifiers. Among the simplest base classifiers are decision stumps: decision trees with a single decision node. This work proposes a variant of the best-known boosting method, AdaBoost. The variant considers, as the base classifier at each boosting round, not only the last weak classifier but a classifier formed by the last r selected weak classifiers, where r is a parameter of the method. If the weak classifiers are decision stumps, the combination of r of them is a decision tree. The ensembles obtained with the variant contain the same number of decision stumps as those of the original AdaBoost, so the two versions produce classifiers of very similar size and computational complexity, both for training and for classification. The experimental study shows that the variant is clearly beneficial.
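To make the idea concrete, below is a minimal Python sketch of the kind of procedure the abstract describes. It is not the authors' algorithm: the abstract does not specify how the last r stumps are combined, so this sketch simply takes their unweighted majority vote before the example weights are updated, and the function names (fit_stump, boost_grouped_stumps, etc.) are hypothetical.

```python
# Hedged sketch of an AdaBoost variant whose per-round base classifier is the
# combination of the last r decision stumps.  This is one possible reading of
# the abstract, not the authors' implementation.
import numpy as np


def fit_stump(X, y, w):
    """Fit a one-node decision tree (stump) minimising weighted 0/1 error.
    Labels y are assumed to be in {-1, +1}."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] <= thr, 1, -1)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best  # (weighted error, feature index, threshold, polarity)


def stump_predict(stump, X):
    _, j, thr, sign = stump
    return sign * np.where(X[:, j] <= thr, 1, -1)


def boost_grouped_stumps(X, y, T=20, r=3):
    """AdaBoost-like loop in which each round's base classifier is the
    majority vote of the last r stumps (a depth-r tree over r stump tests)."""
    n = X.shape[0]
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(T):
        stumps.append(fit_stump(X, y, w))
        # Combined prediction of the last r stumps.
        pred = np.sign(sum(stump_predict(s, X) for s in stumps[-r:]))
        pred[pred == 0] = 1
        err = max(np.sum(w[pred != y]), 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        alphas.append(alpha)
        # Standard exponential reweighting, but driven by the grouped prediction.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
    return stumps, alphas, r


def predict(model, X):
    stumps, alphas, r = model
    score = np.zeros(X.shape[0])
    for t, alpha in enumerate(alphas):
        vote = np.sign(sum(stump_predict(s, X) for s in stumps[max(0, t - r + 1):t + 1]))
        vote[vote == 0] = 1
        score += alpha * vote
    return np.sign(score)


if __name__ == "__main__":
    # Toy usage example on a synthetic, roughly linearly separable problem.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)
    model = boost_grouped_stumps(X, y, T=30, r=3)
    print("training accuracy:", np.mean(predict(model, X) == y))
```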