We obtain a tight distribution-specific characterization of the sample complexity of large-margin classification with L2 regularization. We introduce the margin-adapted dimension, a simple function of the second-order statistics of the data distribution, and show distribution-specific upper and lower bounds on the sample complexity, both governed by the margin-adapted dimension of the data distribution. The upper bounds are universal, and the lower bounds hold for the rich family of sub-Gaussian distributions with independent features. We conclude that this new quantity tightly characterizes the true sample complexity of large-margin classification. To prove the lower bound, we develop several new tools of independent interest. These include new connections between shattering and hardness of learning, new properties of shattering with linear classifiers, and a new lower bound on the smallest eigenvalue of a random Gram matrix generated by sub-Gaussian variables. Our results can be used to quantitatively compare large-margin learning to other learning rules, and to improve the effectiveness of methods that use sample complexity bounds, such as active learning.
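To make the abstract's central quantity concrete, the following is a minimal LaTeX sketch of how a margin-adapted dimension of this kind can be built from second-order statistics of the distribution; the exact normalization and constants used in the paper may differ, so this is an illustrative assumption rather than the authors' precise definition.

% Hedged sketch (assumed form, not necessarily the paper's exact definition):
% let \lambda_1 \ge \lambda_2 \ge \dots denote the eigenvalues of the data
% covariance matrix and \gamma > 0 the margin parameter. A margin-adapted
% dimension k_\gamma can be taken as the smallest k for which the "tail"
% variance outside the top k principal directions is at most \gamma^2 k:
\[
  k_\gamma \;=\; \min\Bigl\{\, k \in \mathbb{N} \;:\; \gamma^2 k \,\ge\, \sum_{i > k} \lambda_i \Bigr\}.
\]
% Under this reading, both the upper and lower bounds described in the
% abstract would scale with k_\gamma, up to constants and logarithmic factors.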