We present new strategies for "probably approximately correct" (pac) learning that use fewer training examples than previous approaches. The idea is to observe training examples one at a time and decide "on-line" when to return a hypothesis, rather than collecting a large fixed-size training sample in advance. This yields sequential learning procedures that pac-learn by observing a small random number of examples. We provide theoretical bounds on the expected training-sample size of our procedures, but establish their efficiency primarily through a series of experiments showing that sequential learning uses many times fewer training examples in practice. These results demonstrate that pac-learning can be achieved far more efficiently in practice than previously thought.
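To illustrate the on-line stopping idea described above, the following Python sketch shows one generic sequential learning loop. The stopping rule here (accept the current hypothesis once it survives a long enough run of consecutive correct predictions, with the run length set by a union-bound argument) is a standard textbook construction assumed for illustration, not necessarily the paper's actual sequential procedure; `draw_example`, `update`, and `predict` are hypothetical callbacks supplied by the user.

```python
import math

def sequential_pac_learn(draw_example, update, predict, epsilon, delta):
    """Generic sequential pac-learning loop (illustrative sketch).

    Examples are drawn one at a time and the learner decides on-line
    when to stop, rather than committing to a fixed-size sample.

    draw_example() -> (x, y)   draw one labeled example (hypothetical)
    update(h, x, y) -> h'      revise the hypothesis after a mistake
    predict(h, x) -> label     current hypothesis's prediction
    """
    h = None          # current hypothesis
    streak = 0        # consecutive correct predictions by h
    version = 0       # number of distinct hypotheses tested so far
    threshold = 1     # required streak before accepting h
    while True:
        x, y = draw_example()
        if h is not None and predict(h, x) == y:
            streak += 1
            if streak >= threshold:
                return h
        else:
            # Mistake: revise the hypothesis and start a fresh test.
            h = update(h, x, y)
            version += 1
            streak = 0
            # An epsilon-bad hypothesis survives `threshold` consecutive
            # examples with probability at most (1 - epsilon)**threshold
            # <= exp(-epsilon * threshold). Choosing the threshold so this
            # is at most delta / (version * (version + 1)) keeps the total
            # failure probability below delta by a union bound over the
            # hypotheses tested (the terms 1/(v*(v+1)) sum to at most 1).
            threshold = math.ceil(
                math.log(version * (version + 1) / delta) / epsilon
            )
```

Under these assumptions the returned hypothesis has error at most epsilon with probability at least 1 - delta, and the number of examples consumed is a random variable rather than a fixed worst-case quantity; this is the sense in which a sequential procedure can stop early on benign problems instead of always drawing the full worst-case sample.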