We consider the problem of classification using a variant of the agnostic learning model in which the algorithm's hypothesis is evaluated by comparison with hypotheses that do not classify all possible instances. Such hypotheses are formalized as functions from the instance space X to {0, *, 1}, where * is interpreted as "don't know". We provide a characterization of the sets of {0, *, 1}-valued functions that are learnable in this setting. Using a similar analysis, we improve on the known sufficient conditions for a class of real-valued functions to be agnostically learnable with a particular relative accuracy; in particular, we improve by a factor of two the scale at which scale-sensitive dimensions must be finite in order to imply learnability.
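To make the evaluation concrete, the following is a minimal sketch of one natural way to score a {0, *, 1}-valued hypothesis t on a labeled example (x, y), assuming the common convention that answering "don't know" is charged half an error; the loss \(\ell\), the distribution P, and the comparison class F below are illustrative notation, and the paper's exact formalization may differ.

\[
\ell\bigl(t,(x,y)\bigr) =
\begin{cases}
0 & \text{if } t(x) = y,\\
\tfrac{1}{2} & \text{if } t(x) = *,\\
1 & \text{if } t(x) = 1-y,
\end{cases}
\qquad
\mathrm{er}_P(t) = \mathbb{E}_{(x,y)\sim P}\bigl[\ell\bigl(t,(x,y)\bigr)\bigr].
\]

Under this reading, a learner that outputs a {0, 1}-valued hypothesis h is judged relative to \(\inf_{t \in F} \mathrm{er}_P(t)\), the best error achievable by the partial classifiers in F. For the real-valued part of the result, a standard scale-sensitive dimension is the fat-shattering dimension: F \(\gamma\)-shatters points \(x_1, \ldots, x_d\) if there are witnesses \(r_1, \ldots, r_d\) such that for every \(b \in \{0,1\}^d\) some \(f \in F\) satisfies \(f(x_i) \ge r_i + \gamma\) when \(b_i = 1\) and \(f(x_i) \le r_i - \gamma\) when \(b_i = 0\); \(\mathrm{fat}_F(\gamma)\) is the largest such d, and "the scale at which scale-sensitive dimensions must be finite" refers to this parameter \(\gamma\).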