We derive new margin-based inequalities for the probability of error of classifiers. The main feature of these bounds is that they can be computed from the training data and may therefore be used effectively for model selection. In particular, the bounds involve empirical complexities measured on the training data (such as the empirical fat-shattering dimension), as opposed to the worst-case counterparts traditionally used in such analyses. Moreover, our bounds appear to be sharper and more general than recent results involving empirical complexity measures. For classes of convex combinations of classifiers, we additionally develop a data-based bound on the generalization error that involves an empirical complexity measure which is easier to compute than the empirical covering number or the fat-shattering dimension. Finally, we give examples in which the new bounds can be computed efficiently.
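The abstract does not spell out the alternative complexity measure, so as a rough illustration (not the paper's exact construction) the sketch below estimates a standard data-based quantity of this kind: the empirical Rademacher complexity of a finite base class, approximated by Monte Carlo over random sign vectors. This is relevant to convex combinations because the Rademacher complexity of the convex hull of a class equals that of the class itself, which is one reason such sample-based measures are cheaper to evaluate than empirical covering numbers or the fat-shattering dimension. All names here (`empirical_rademacher`, `predictions`, `n_rounds`) are hypothetical.

```python
import numpy as np

def empirical_rademacher(predictions, n_rounds=200, rng=None):
    """Monte Carlo estimate of the (one-sided, no absolute value)
    empirical Rademacher complexity of a finite base class.

    predictions: array of shape (m, n) holding h(x_i) in {-1, +1}
    for each of m base classifiers h and n fixed training points x_i.
    This is an illustrative sketch, not the paper's specific measure.
    """
    rng = np.random.default_rng(rng)
    m, n = predictions.shape
    total = 0.0
    for _ in range(n_rounds):
        # Draw i.i.d. Rademacher signs sigma_i in {-1, +1}.
        sigma = rng.choice([-1.0, 1.0], size=n)
        # Sup over the base class of the empirical correlation with sigma;
        # (predictions @ sigma)[j] = sum_i sigma_i * h_j(x_i).
        total += np.max(predictions @ sigma) / n
    return total / n_rounds

# Toy usage: 5 random "base classifiers" evaluated on 100 sample points.
gen = np.random.default_rng(0)
preds = gen.choice([-1.0, 1.0], size=(5, 100))
print(empirical_rademacher(preds, rng=1))
```

Each Monte Carlo round costs a single pass over the stored predictions, so the estimate is computed directly from the training sample, in the spirit of the data-dependent bounds described above.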