A lower bound for 0,1, * tournament codes
Discrete Mathematics
Learnability and the Vapnik-Chervonenkis dimension
Journal of the ACM (JACM)
COLT '89 Proceedings of the second annual workshop on Computational learning theory
Decision theoretic generalizations of the PAC model for neural net and other learning applications
Information and Computation
Efficient distribution-free learning of probabilistic concepts
Journal of Computer and System Sciences - Special issue: 31st IEEE Conference on Foundations of Computer Science, Oct. 22–24, 1990
Characterizations of learnability for classes of {0, …, n}-valued functions
Journal of Computer and System Sciences
A generalization of Sauer's lemma
Journal of Combinatorial Theory Series A
Bounds on the number of examples needed for learning functions
Euro-COLT '93 Proceedings of the first European conference on Computational learning theory
More theorems about scale-sensitive dimensions and learning
COLT '95 Proceedings of the eighth annual conference on Computational learning theory
Fat-shattering and the learnability of real-valued functions
Journal of Computer and System Sciences
Estimation of Dependences Based on Empirical Data (Springer Series in Statistics)
A framework for structural risk minimisation
COLT '96 Proceedings of the ninth annual conference on Computational learning theory
A PAC analysis of a Bayesian estimator
COLT '97 Proceedings of the tenth annual conference on Computational learning theory
On the sample complexity of learning functions with bounded variation
COLT' 98 Proceedings of the eleventh annual conference on Computational learning theory
Further results on the margin distribution
COLT '99 Proceedings of the twelfth annual conference on Computational learning theory
Enlarging the Margins in Perceptron Decision Trees
Machine Learning
On the VC Dimension of Bounded Margin Classifiers
Machine Learning
Generalization Ability of Folding Networks
IEEE Transactions on Knowledge and Data Engineering
Learnability in Hilbert spaces with reproducing kernels
Journal of Complexity
Mathematical Modelling of Generalization
WIRN VIETRI 2002 Proceedings of the 13th Italian Workshop on Neural Nets-Revised Papers
Entropy Numbers, Operators and Support Vector Kernels
EuroCOLT '99 Proceedings of the 4th European Conference on Computational Learning Theory
On the Vγ Dimension for Regression in Reproducing Kernel Hilbert Spaces
ALT '99 Proceedings of the 10th International Conference on Algorithmic Learning Theory
A Note on the Generalization Performance of Kernel Classifiers with Margin
ALT '00 Proceedings of the 11th International Conference on Algorithmic Learning Theory
On Agnostic Learning with {0, *, 1}-Valued and Real-Valued Hypotheses
COLT '01/EuroCOLT '01 Proceedings of the 14th Annual Conference on Computational Learning Theory and 5th European Conference on Computational Learning Theory
Data-Dependent Margin-Based Generalization Bounds for Classification
COLT '01/EuroCOLT '01 Proceedings of the 14th Annual Conference on Computational Learning Theory and 5th European Conference on Computational Learning Theory
Geometric Methods in the Analysis of Glivenko-Cantelli Classes
COLT '01/EuroCOLT '01 Proceedings of the 14th Annual Conference on Computational Learning Theory and 5th European Conference on Computational Learning Theory
Bounds on the Generalization Ability of Bayesian Inference and Gibbs Algorithms
ICANN '01 Proceedings of the International Conference on Artificial Neural Networks
Using the Pseudo-Dimension to Analyze Approximation Algorithms for Integer Programming
WADS '01 Proceedings of the 7th International Workshop on Algorithms and Data Structures
Generalization Performance of Classifiers in Terms of Observed Covering Numbers
EuroCOLT '99 Proceedings of the 4th European Conference on Computational Learning Theory
Entropy, Combinatorial Dimensions and Random Averages
COLT '02 Proceedings of the 15th Annual Conference on Computational Learning Theory
A few notes on statistical learning theory
Advanced lectures on machine learning
A short introduction to learning with kernels
Advanced lectures on machine learning
On the size of convex hulls of small sets
The Journal of Machine Learning Research
Covering number bounds of certain regularized linear function classes
The Journal of Machine Learning Research
Data-dependent margin-based generalization bounds for classification
The Journal of Machine Learning Research
A note on different covering numbers in learning theory
Journal of Complexity
Efficient algorithms for learning functions with bounded variation
Information and Computation
Function Learning from Interpolation
Combinatorics, Probability and Computing
Fat-Shattering of Affine Functions
Combinatorics, Probability and Computing
Generalization Error Bounds for Threshold Decision Lists
The Journal of Machine Learning Research
On the Importance of Small Coordinate Projections
The Journal of Machine Learning Research
Small size quantum automata recognizing some regular languages
Theoretical Computer Science - The art of theory
The performance bounds of learning machines based on exponentially strongly mixing sequences
Computers & Mathematics with Applications
Learnability of Gaussians with Flexible Variances
The Journal of Machine Learning Research
Estimates of covering numbers of convex sets with slowly decaying orthogonal subsets
Discrete Applied Mathematics
Aspects of discrete mathematics and probability in the theory of machine learning
Discrete Applied Mathematics
Obtaining fast error rates in nonconvex situations
Journal of Complexity
VC Theory of Large Margin Multi-Category Classifiers
The Journal of Machine Learning Research
Using domain-specific knowledge in generalization error bounds for support vector machine learning
Decision Support Systems
Maximal width learning of binary functions
Theoretical Computer Science
A full characterization of quantum advice
Proceedings of the forty-second ACM symposium on Theory of computing
Simulation-based optimization of Markov decision processes: An empirical process theory approach
Automatica (Journal of IFAC)
Rademacher chaos complexities for learning the kernel problem
Neural Computation
Inapproximability for VCG-based combinatorial auctions
SODA '10 Proceedings of the twenty-first annual ACM-SIAM symposium on Discrete Algorithms
Universal ε-approximators for integrals
SODA '10 Proceedings of the twenty-first annual ACM-SIAM symposium on Discrete Algorithms
Differential privacy and the fat-shattering dimension of linear queries
APPROX/RANDOM'10 Proceedings of the 13th International Conference on Approximation, and 14th International Conference on Randomization, and Combinatorial Optimization: Algorithms and Techniques
Approximation stability and boosting
ALT'10 Proceedings of the 21st international conference on Algorithmic learning theory
Learnability, Stability and Uniform Convergence
The Journal of Machine Learning Research
Tree Decomposition for Large-Scale SVM Problems
The Journal of Machine Learning Research
Comparing distributions and shapes using the kernel distance
Proceedings of the twenty-seventh annual symposium on Computational geometry
Context sensitive information: model validation by information theory
MCPR'11 Proceedings of the Third Mexican conference on Pattern recognition
Metric anomaly detection via asymmetric risk minimization
SIMBAD'11 Proceedings of the First international conference on Similarity-based pattern recognition
The consistency analysis of coefficient regularized classification with convex loss
WSEAS Transactions on Mathematics
ICNC'06 Proceedings of the Second international conference on Advances in Natural Computation - Volume Part I
Data reduction for weighted and outlier-resistant clustering
Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms
Learnability of bipartite ranking functions
COLT'05 Proceedings of the 18th annual conference on Learning Theory
Automatica (Journal of IFAC)
Learning with stochastic inputs and adversarial outputs
Journal of Computer and System Sciences
Application of integral operator for regularized least-square regression
Mathematical and Computer Modelling: An International Journal
Bisimulation Metrics for Continuous Markov Decision Processes
SIAM Journal on Computing
Analysis of a multi-category classifier
Discrete Applied Mathematics
Learnability beyond uniform convergence
ALT'12 Proceedings of the 23rd international conference on Algorithmic Learning Theory
PAC learnability under non-atomic measures: A problem by Vidyasagar
Theoretical Computer Science
The approximate rank of a matrix and its algorithmic applications: approximate rank
Proceedings of the forty-fifth annual ACM symposium on Theory of computing
Multi-instance learning with any hypothesis class
The Journal of Machine Learning Research
Efficient regression in metric spaces via approximate lipschitz extension
SIMBAD'13 Proceedings of the Second international conference on Similarity-Based Pattern Recognition
Uniform convergence, stability and learnability for ranking problems
IJCAI'13 Proceedings of the Twenty-Third international joint conference on Artificial Intelligence
Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distribution-free convergence property of means to expectations uniformly over classes of random variables. Classes of real-valued functions enjoying such a property are also known as uniform Glivenko-Cantelli classes. In this paper, we prove, through a generalization of Sauer's lemma that may be interesting in its own right, a new characterization of uniform Glivenko-Cantelli classes. Our characterization yields Dudley, Giné, and Zinn's previous characterization as a corollary. Furthermore, it is the first based on a simple combinatorial quantity generalizing the Vapnik-Chervonenkis dimension. We apply this result to obtain the weakest combinatorial condition known to imply PAC learnability in the statistical regression (or "agnostic") framework. We also find a characterization of learnability in the probabilistic concept model, solving an open problem posed by Kearns and Schapire. These results show that the accuracy parameter plays a crucial role in determining the effective complexity of the learner's hypothesis class.
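For context, the two central notions in the abstract can be stated in their standard forms (these are the classical textbook statements, not formulas taken from the paper itself). A class F of functions mapping into [0, 1] is uniform Glivenko-Cantelli when empirical means converge to expectations uniformly over F and over all underlying distributions P, and the classical Sauer's lemma — which the paper generalizes — bounds the number of distinct restrictions of a {0, 1}-valued class of VC dimension d to any n points:

```latex
% Uniform Glivenko-Cantelli property: distribution-free uniform
% convergence of empirical means to expectations.
\forall \epsilon > 0:\quad
\lim_{n \to \infty} \sup_{P}\,
\Pr\!\Big[\, \sup_{m \ge n}\, \sup_{f \in \mathcal{F}}
\Big| \frac{1}{m} \sum_{i=1}^{m} f(X_i) - \mathbb{E}_{P}[f] \Big|
> \epsilon \,\Big] = 0

% Classical Sauer's lemma: a {0,1}-valued class of VC dimension d
% induces only polynomially many labelings on any n points.
\big|\, \mathcal{F}|_{x_1, \dots, x_n} \big|
\;\le\; \sum_{i=0}^{d} \binom{n}{i}
```

The paper's contribution is a combinatorial quantity for real-valued classes that plays the role d plays above, together with a Sauer-type bound for it.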