In this largely expository article, we highlight the significance of various notions of 'dimension' for obtaining uniform convergence results in probability theory, and we demonstrate how these results lead to certain notions of generalization for classes of binary-valued and real-valued functions. We also present new results on the generalization ability of certain types of artificial neural networks with real output.
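To make the idea of a combinatorial 'dimension' concrete, the following sketch (not from the article; an illustrative assumption) checks whether a hypothesis class *shatters* a finite set of points, i.e. realizes every possible binary labelling of it. The Vapnik-Chervonenkis dimension of a class is the size of the largest shatterable set; for one-dimensional threshold functions it is 1, as the example shows.

```python
def shatters(hypotheses, points):
    """Return True if the class realizes every 0/1 labelling of `points`."""
    labellings = {tuple(h(x) for x in points) for h in hypotheses}
    return len(labellings) == 2 ** len(points)

# One-dimensional thresholds: h_t(x) = 1 if x >= t, else 0.
thresholds = [-0.5, 0.5, 1.5]
hyps = [lambda x, t=t: int(x >= t) for t in thresholds]

points = [0.0, 1.0]
print(shatters(hyps, points[:1]))  # one point is shattered -> True
print(shatters(hyps, points))      # labelling (1, 0) is unrealizable -> False
```

Since some pair of points always defeats any collection of thresholds in this way (the labelling assigning 1 to the smaller point and 0 to the larger is never realized), the VC dimension of threshold functions is exactly 1; finiteness of such a dimension is what drives the uniform convergence results the article surveys.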