Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM (JACM).
Toward efficient agnostic learning. COLT '92: Proceedings of the Fifth Annual Workshop on Computational Learning Theory.
Decision theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation.
The nature of statistical learning theory.
Scale-sensitive dimensions, uniform convergence, and learnability. Journal of the ACM (JACM).
A technique for the numerical solution of certain integral equations of the first kind. Journal of the ACM (JACM).
Learning in Neural Networks: Theoretical Foundations.
On the convergence rate of Good-Turing estimators. COLT '00: Proceedings of the Thirteenth Annual Conference on Computational Learning Theory.
The strength of weak learnability. SFCS '89: Proceedings of the 30th Annual Symposium on Foundations of Computer Science.
Almost-everywhere algorithmic stability and generalization error. UAI '02: Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence.
On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory.
Learnability beyond uniform convergence. ALT '12: Proceedings of the 23rd International Conference on Algorithmic Learning Theory.
Toward nonlinear local reinforcement learning rules through neuroevolution. Neural Computation.
Uniform convergence, stability and learnability for ranking problems. IJCAI '13: Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence.
The problem of characterizing learnability is the most basic question of statistical learning theory. A fundamental and long-standing answer, at least for the case of supervised classification and regression, is that learnability is equivalent to uniform convergence of the empirical risk to the population risk, and that if a problem is learnable, it is learnable via empirical risk minimization. In this paper, we consider the General Learning Setting (introduced by Vapnik), which includes most statistical learning problems as special cases. We show that in this setting there are non-trivial learning problems in which uniform convergence does not hold and empirical risk minimization fails, yet which are learnable via alternative mechanisms. Instead of uniform convergence, we identify stability as the key necessary and sufficient condition for learnability. Moreover, we show that the conditions for learnability in the general setting are significantly more complex than in supervised classification and regression.
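To make the abstract's claim concrete: in Vapnik's General Learning Setting, a problem is specified by a hypothesis class \(\mathcal{H}\), a domain \(\mathcal{Z}\), and a loss \(f : \mathcal{H} \times \mathcal{Z} \to \mathbb{R}\), with an unknown distribution \(\mathcal{D}\) over \(\mathcal{Z}\). The sketch below states the relevant quantities in standard notation; the replace-one stability definition shown is one common formalization, given only for illustration, since the abstract does not pin down the paper's exact stability variant.
\[
F(h) = \mathbb{E}_{z \sim \mathcal{D}}\bigl[f(h; z)\bigr], \qquad
F_S(h) = \frac{1}{n}\sum_{i=1}^{n} f(h; z_i), \qquad
S = (z_1, \dots, z_n) \sim \mathcal{D}^n .
\]
A learner seeks \(h\) with \(F(h)\) close to \(\inf_{h' \in \mathcal{H}} F(h')\); empirical risk minimization returns \(\hat{h}_S \in \arg\min_{h \in \mathcal{H}} F_S(h)\). Uniform convergence requires
\[
\sup_{h \in \mathcal{H}} \bigl| F_S(h) - F(h) \bigr| \;\xrightarrow{\;P\;}\; 0 \quad \text{as } n \to \infty,
\]
whereas a learning rule \(A\) is on-average replace-one stable if
\[
\frac{1}{n}\sum_{i=1}^{n} \mathbb{E}\Bigl[ f\bigl(A(S^{(i)}); z_i\bigr) - f\bigl(A(S); z_i\bigr) \Bigr] \;\longrightarrow\; 0,
\]
where \(S^{(i)}\) denotes \(S\) with \(z_i\) replaced by an independent fresh example \(z_i'\), and the expectation is over both \(S\) and \(z_i'\). The abstract's point is that, in this general setting, a condition of this stability type, rather than uniform convergence, is what characterizes learnability.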