How tight are the Vapnik-Chervonenkis bounds?
Neural Computation
Rigorous learning curve bounds from statistical mechanics
COLT '94: Proceedings of the Seventh Annual Conference on Computational Learning Theory
Estimating learning curves of concept learning
Neural Networks
Towards more practical average bounds on supervised learning
IEEE Transactions on Neural Networks
A tight bound on concept learning
IEEE Transactions on Neural Networks
How Bad May Learning Curves Be?
IEEE Transactions on Pattern Analysis and Machine Intelligence
Modelling Classification Performance for Large Data Sets
WAIM '01: Proceedings of the Second International Conference on Advances in Web-Age Information Management
Learning curves exhibit a diversity of behaviors, such as phase transitions. However, our understanding of learning curves is still extremely limited, and existing theories can give the impression that, without empirical studies (e.g., cross-validation), one can do little more than offer qualitative interpretations. In this note, we propose a theory of learning curves based on the idea of reducing learning problems to hypothesis-testing ones. This theory provides a simple approach that is potentially useful for predicting and interpreting (a diversity of) learning curve behaviors, both qualitatively and quantitatively, and it applies to finite training sample sizes, finite learning machines, and learning situations not necessarily within the Bayesian framework. We illustrate the results by examining some exponential learning curve behaviors observed in the experiment of Cohn and Tesauro (1992).
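As a rough illustration of what "exponential learning curve behavior" means in practice, the sketch below (not taken from the paper; the constants and the `fit_exponential_curve` helper are illustrative assumptions) fits the form eps(m) ≈ a · exp(-m / b), where m is the training set size and eps the generalization error, via a log-linear least-squares fit:

```python
import numpy as np

def fit_exponential_curve(m, eps):
    """Fit eps ~ a * exp(-m / b) to observed errors; return (a, b).

    Taking logs gives log(eps) = log(a) - m / b, a straight line in m,
    so ordinary least squares on (m, log(eps)) recovers both constants.
    """
    slope, intercept = np.polyfit(m, np.log(eps), 1)
    return np.exp(intercept), -1.0 / slope

# Synthetic measurements drawn from a known exponential curve
# (a = 0.5, b = 40), standing in for empirically observed error rates.
m = np.arange(10, 200, 10)
eps = 0.5 * np.exp(-m / 40.0)

a, b = fit_exponential_curve(m, eps)
```

On real (noisy) error estimates the recovered constants are only approximate, and a systematic lack of fit to the exponential form is itself informative, e.g. it may indicate a power-law regime or a phase transition instead.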