In this paper, we study a statistical property of classes of real-valued functions that we call approximation from interpolated examples. We derive a characterization of the function classes that have this property in terms of their ‘fat-shattering function’, a notion that has proved useful in computational learning theory. The property is central to a problem of learning real-valued functions from random examples in which satisfactory performance is required of every algorithm that returns a function approximately interpolating the training examples.
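To make the fat-shattering notion concrete, here is a minimal brute-force sketch (not from the paper; the function representation, the fixed witness values, and the example class are all assumptions for illustration). A set of points is γ-fat-shattered by a class F, with witness values r_i, if for every binary labeling of the points some f in F exceeds r_i + γ on the points labeled 1 and stays below r_i − γ on the points labeled 0:

```python
from itertools import product

def fat_shatters(functions, points, witnesses, gamma):
    """Check whether a finite class gamma-fat-shatters `points`.

    functions: a finite list of callables (an assumption so that brute
      force is possible); points: the candidate shattered set;
    witnesses: one real witness value r_i per point; gamma: the margin.
    For every labeling b in {0,1}^d there must be some f with
      f(x_i) >= r_i + gamma when b_i = 1, and
      f(x_i) <= r_i - gamma when b_i = 0.
    """
    for labeling in product([0, 1], repeat=len(points)):
        realized = any(
            all(
                (f(x) >= r + gamma) if b else (f(x) <= r - gamma)
                for x, r, b in zip(points, witnesses, labeling)
            )
            for f in functions
        )
        if not realized:
            return False
    return True

# Hypothetical example class: indicator functions of subsets of {0, 1}.
# With witnesses 0.5 and margin 0.4, this class fat-shatters both points,
# while a single constant function cannot fat-shatter even one point.
indicators = [lambda x, S=S: 1.0 if x in S else 0.0
              for S in [(), (0,), (1,), (0, 1)]]
```

The fat-shattering function of a class maps each γ to the largest d for which some d-point set is γ-fat-shattered; the paper's characterization is stated in terms of this function.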