One basic property of boosting algorithms is their ability to reduce the training error, subject to the critical assumption that the base learners generate 'weak' (or, more appropriately, 'weakly accurate') hypotheses that are better than random guessing. We exploit analogies between regression and classification to characterize which base learners generate weak hypotheses, by introducing a geometric concept called the angular span of the base hypothesis space. The exponential convergence rates of boosting algorithms are shown to be bounded below essentially by the angular span. Sufficient conditions for a nonzero angular span are also given and validated for a wide class of regression and classification systems.
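For context, a minimal sketch of the standard weak-learning assumption and the exponential training-error bound it yields (this is the classical AdaBoost bound of Freund and Schapire, stated for illustration only; the symbols $\epsilon_t$, $\gamma_t$, $\gamma$, and $H$ are notation introduced here, and the paper's angular-span bound is a different quantity):

\[
\epsilon_t \;=\; \tfrac{1}{2} - \gamma_t \;\le\; \tfrac{1}{2} - \gamma,
\qquad
\frac{1}{m}\sum_{i=1}^{m} \mathbf{1}\{H(x_i) \neq y_i\}
\;\le\; \prod_{t=1}^{T} 2\sqrt{\epsilon_t(1-\epsilon_t)}
\;=\; \prod_{t=1}^{T} \sqrt{1 - 4\gamma_t^{2}}
\;\le\; e^{-2\gamma^{2} T},
\]

where $\epsilon_t$ is the weighted error of the $t$-th base hypothesis, $\gamma_t$ its edge over random guessing, and $H$ the combined hypothesis after $T$ rounds. The sketch illustrates the general pattern the abstract refers to: a uniformly positive edge (here $\gamma$; in the paper, a nonzero angular span) forces the training error to decay exponentially in the number of boosting rounds.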