An algorithm is a weak learner if, with some small probability, it outputs a hypothesis with error slightly below 50%. This paper presents sufficient conditions for weak learning.

Our main result requires a “consistency oracle” for the concept class F, which decides for a given set of examples whether there is a concept in F consistent with the examples. We show that such an oracle can be used to construct a computationally efficient weak learning algorithm for F if F is learnable at all. We also consider consistency oracles that are allowed to give wrong answers and discuss how the number of incorrect answers affects the oracle's usefulness in computationally efficient weak learning algorithms.

We also define “weak Occam algorithms” which, when given a set of m examples, select a consistent hypothesis from some class of 2^(m−(1/p(m))) possible hypotheses. We show that these weak Occam algorithms are also weak learners. In contrast, we show that an Occam-style algorithm which selects a consistent hypothesis from a class of 2^(m+1)−2 hypotheses is not a weak learner.
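To make the two notions concrete, here is a minimal toy sketch of a consistency oracle and an Occam-style (select-any-consistent-hypothesis) learner. The hypothesis class (monotone conjunctions over N Boolean variables), the function names, and the target concept are all illustrative choices of mine, not definitions from the paper; the sketch only shows the interface of a consistency oracle and of a consistent-hypothesis selector, not the paper's weak-learning construction.

```python
from itertools import combinations, product

# Toy domain: Boolean vectors of length N. Hypotheses are monotone
# conjunctions, one per subset of variable indices (2^N hypotheses).
# This entire setup is an illustrative assumption, not the paper's.
N = 4

def conj(relevant):
    # Hypothesis: AND of the variables whose indices are in `relevant`.
    return lambda x: all(x[i] for i in relevant)

hypothesis_class = [conj(set(s))
                    for r in range(N + 1)
                    for s in combinations(range(N), r)]

def consistent(h, examples):
    # True iff h labels every example correctly.
    return all(h(x) == y for x, y in examples)

def consistency_oracle(examples):
    # Decides whether *some* hypothesis in the class fits the sample --
    # the yes/no question the paper's consistency oracle answers.
    return any(consistent(h, examples) for h in hypothesis_class)

def occam_select(examples):
    # Occam-style learner: return any consistent hypothesis, or None.
    return next((h for h in hypothesis_class if consistent(h, examples)),
                None)

# Target concept: x0 AND x2; label every point of the domain.
target = conj({0, 2})
sample = [(x, target(x)) for x in product([0, 1], repeat=N)]

assert consistency_oracle(sample)
h = occam_select(sample)
assert h is not None and consistent(h, sample)
```

Here the class is small enough to enumerate, so the oracle is trivial; the paper's point is what can be built when only such yes/no consistency answers (possibly erroneous ones) are available for an otherwise intractable class.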