Classifying learnable geometric concepts with the Vapnik-Chervonenkis dimension
STOC '86 Proceedings of the eighteenth annual ACM symposium on Theory of computing
Cryptographic limitations on learning Boolean formulae and finite automata
STOC '89 Proceedings of the twenty-first annual ACM symposium on Theory of computing
The Strength of Weak Learnability
Machine Learning
Boosting a weak learning algorithm by majority
COLT '90 Proceedings of the third annual workshop on Computational learning theory
Equivalence of models for polynomial learnability
Information and Computation
The design and analysis of efficient learning algorithms
Learning from a population of hypotheses
COLT '93 Proceedings of the sixth annual conference on Computational learning theory
Statistical queries and faulty PAC oracles
COLT '93 Proceedings of the sixth annual conference on Computational learning theory
Amplification of weak learning under the uniform distribution
COLT '93 Proceedings of the sixth annual conference on Computational learning theory
Conservativeness and monotonicity for learning algorithms
COLT '93 Proceedings of the sixth annual conference on Computational learning theory
On the learnability of discrete distributions
STOC '94 Proceedings of the twenty-sixth annual ACM symposium on Theory of computing
Learning DNF over the uniform distribution using a quantum example oracle
COLT '95 Proceedings of the eighth annual conference on Computational learning theory
Noise-tolerant parallel learning of geometric concepts
COLT '95 Proceedings of the eighth annual conference on Computational learning theory
Specification and simulation of statistical query algorithms for efficiency and noise tolerance
COLT '95 Proceedings of the eighth annual conference on Computational learning theory
Evaluation may be easier than generation (extended abstract)
STOC '96 Proceedings of the twenty-eighth annual ACM symposium on Theory of computing
Towards the learnability of DNF formulae
STOC '96 Proceedings of the twenty-eighth annual ACM symposium on Theory of computing
More efficient PAC-learning of DNF with membership queries under the uniform distribution
COLT '99 Proceedings of the twelfth annual conference on Computational learning theory
PAC Analogues of Perceptron and Winnow Via Boosting the Margin
Machine Learning
Boosting and Hard-Core Set Construction
Machine Learning
Optimally-Smooth Adaptive Boosting and Application to Agnostic Learning
ALT '02 Proceedings of the 13th International Conference on Algorithmic Learning Theory
On boosting with polynomially bounded distributions
The Journal of Machine Learning Research
Optimally-smooth adaptive boosting and application to agnostic learning
The Journal of Machine Learning Research
The monotone theory for the PAC-model
Information and Computation
More efficient PAC-learning of DNF with membership queries under the uniform distribution
Journal of Computer and System Sciences
Solving multiclass learning problems via error-correcting output codes
Journal of Artificial Intelligence Research
Smooth boosting using an information-based criterion
ALT'06 Proceedings of the 17th international conference on Algorithmic Learning Theory
On attribute efficient and non-adaptive learning of parities and DNF expressions
COLT'05 Proceedings of the 18th annual conference on Learning Theory
In this work we present some improvements and extensions to previous work on boosting weak learners [Sch90, Fre90]. Our main result is an improvement of the boosting-by-majority algorithm. One implication of the performance of this algorithm is that if a concept class can be learned in the PAC model to within some fixed error smaller than 1/2, then it can be learned to within an arbitrarily small error ε > 0 with time complexity O((1/ε)(log 1/ε)^2) (fixing the sample space and concept class and the required reliability). We show that the majority rule is the optimal rule for combining general weak learners. We also extend the boosting algorithm to concept classes that give multi-valued labels and real-valued labels.
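The core idea behind boosting by majority can be illustrated with a minimal sketch: repeatedly train a weak hypothesis on reweighted examples, then combine all of them by an unweighted majority vote — the combination rule the abstract argues is optimal. The sketch below is an assumption-laden toy, not the paper's algorithm: weak learners are decision stumps on one real feature, the reweighting step is an AdaBoost-style multiplicative update (Freund's scheme differs), and the dataset, function names, and round count are all invented for illustration.

```python
def stump(threshold, sign):
    """Weak hypothesis: predicts `sign` if x > threshold, else -sign."""
    return lambda x: sign if x > threshold else -sign

def best_stump(xs, ys, w):
    """Pick the stump (threshold, direction) with lowest weighted error."""
    best, best_err = None, float("inf")
    for t in xs:
        for s in (+1, -1):
            h = stump(t, s)
            err = sum(wi for xi, yi, wi in zip(xs, ys, w) if h(xi) != yi)
            if err < best_err:
                best, best_err = h, err
    return best, best_err

def boost_by_majority_sketch(xs, ys, rounds=15):
    """Toy boosting loop: AdaBoost-style reweighting, unweighted
    majority vote over the weak hypotheses (hedged simplification)."""
    n = len(xs)
    w = [1.0 / n] * n
    hs = []
    for _ in range(rounds):
        h, err = best_stump(xs, ys, w)
        hs.append(h)
        # Shrink the weight of correctly classified examples so the next
        # weak learner focuses on the mistakes.
        beta = max(err / max(1.0 - err, 1e-12), 1e-12)
        w = [wi * (1.0 if h(xi) != yi else beta)
             for xi, yi, wi in zip(xs, ys, w)]
        total = sum(w)
        w = [wi / total for wi in w]

    def majority(x):
        # Unweighted majority vote; with an odd number of ±1 votes
        # there are no ties.
        return 1 if sum(h(x) for h in hs) >= 0 else -1
    return majority

# Target concept: an interval, which no single stump labels perfectly
# but a majority vote of a few stumps can.
xs = [i / 20.0 for i in range(21)]
ys = [1 if 0.3 <= x <= 0.7 else -1 for x in xs]
H = boost_by_majority_sketch(xs, ys)
accuracy = sum(H(x) == y for x, y in zip(xs, ys)) / len(xs)
```

On this toy interval concept the best single stump errs on roughly a third of the points, while the voted combination recovers the interval, illustrating the error amplification the abstract describes.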