Strong minimax lower bounds for learning
COLT '96 Proceedings of the ninth annual conference on Computational learning theory
Minimax lower bounds for concept learning state, for example, that for each sample size n and learning rule g_n, there exists a distribution of the observation X and a concept C to be learned such that the expected error of g_n is at least a constant times V/n, where V is the VC dimension of the concept class. However, these bounds say nothing about the rate of decrease of the error for a fixed distribution-concept pair. In this paper we investigate minimax lower bounds in such a (stronger) sense. We show that for several natural k-parameter concept classes, including the class of linear halfspaces, the class of balls, the class of polyhedra with a certain number of faces, and a class of neural networks, for any sequence of learning rules g_n there exists a fixed distribution of X and a fixed concept C such that the expected error is larger than a constant times k/n for infinitely many n. We also obtain such strong minimax lower bounds for the tail distribution of the probability of error, which extend the corresponding minimax lower bounds.
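The distinction drawn in the abstract can be made explicit in formulas. The following is a sketch of the two statements as usually written, where L(g_n) denotes the probability of error of g_n, mu the distribution of X, and c, c' unspecified positive constants; the precise constants and conditions are those of the paper, not the expressions below.

```latex
% Classical ("weak") minimax lower bound:
% the hard pair (mu, C) may change with the sample size n.
\inf_{g_n}\;\sup_{(\mu,\,C)}\;\mathbb{E}\!\left[L(g_n)\right]\;\ge\;c\,\frac{V}{n}
\qquad\text{for every } n .

% Strong minimax lower bound (this paper), for the k-parameter
% classes listed: one fixed pair (mu, C) is hard for the whole
% sequence of learning rules, along infinitely many sample sizes.
\forall\,(g_n)_{n\ge 1}\;\;\exists\,(\mu, C):\qquad
\mathbb{E}\!\left[L(g_n)\right]\;\ge\;c'\,\frac{k}{n}
\qquad\text{for infinitely many } n .
```

The order of quantifiers is the whole point: in the weak bound the adversarial pair is chosen after n, while in the strong bound a single fixed pair defeats every learning rule sequence infinitely often.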