This paper concerns learning binary-valued functions defined on R, and investigates how a particular type of 'regularity' of hypotheses can be used to obtain better generalization error bounds. We derive error bounds that depend on the sample width (a notion analogous to that of sample margin for real-valued functions). This motivates learning algorithms that seek to maximize sample width.
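For intuition, the following Python sketch illustrates one plausible reading of "sample width" for a binary-valued hypothesis on R: the width at a sample point is taken to be its distance to the nearest point where the hypothesis changes sign, and the sample width is the minimum of these over the sample. These definitions, the single-threshold hypothesis, and all function names are illustrative assumptions for this sketch, not the paper's own formulation or algorithm.

```python
# Minimal sketch (assumed definitions, not the paper's algorithm):
# width of h at x = distance from x to the nearest decision boundary of h;
# sample width = minimum width over correctly classified sample points.
# A width-maximizing learner would prefer, among hypotheses consistent with
# the sample, one whose decision boundaries lie far from every sample point.

from typing import Callable, Sequence, Tuple


def width_at_point(h: Callable[[float], int], x: float,
                   boundaries: Sequence[float]) -> float:
    """Distance from x to the nearest decision boundary of h.

    `boundaries` lists the points where h switches between -1 and +1;
    a hypothesis with no boundary has unbounded width.
    """
    if not boundaries:
        return float("inf")
    return min(abs(x - b) for b in boundaries)


def sample_width(h: Callable[[float], int],
                 boundaries: Sequence[float],
                 sample: Sequence[Tuple[float, int]]) -> float:
    """Minimum width over the sample; 0.0 if h misclassifies any point."""
    widths = []
    for x, y in sample:
        if h(x) != y:
            return 0.0
        widths.append(width_at_point(h, x, boundaries))
    return min(widths)


# Usage example: a single-threshold hypothesis h(x) = +1 iff x >= t,
# with the threshold placed midway between the closest opposite-label points.
sample = [(-2.0, -1), (-0.5, -1), (1.0, 1), (3.0, 1)]
t = 0.25
h = lambda x: 1 if x >= t else -1
print(sample_width(h, [t], sample))  # 0.75: distance from -0.5 (and 1.0) to t
```

Choosing the threshold at the midpoint of the closest oppositely labelled points maximizes the sample width for this hypothesis class, which is the analogue of a maximum-margin choice for real-valued classifiers.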