We derive new margin-based inequalities for the probability of error of classifiers. The main feature of these bounds is that they can be calculated from the training data and therefore may be used effectively for model selection. In particular, the bounds involve quantities such as the empirical fat-shattering dimension and covering numbers measured on the training data, as opposed to their worst-case counterparts traditionally used in such analyses, and they appear to be sharper and more general than recent results involving empirical complexity measures. In addition, we develop an alternative data-based bound for the generalization error of classes of convex combinations of classifiers, involving an empirical complexity measure that is more easily computable than the empirical covering number or the fat-shattering dimension. We also give an example of efficient computation of the new bounds.
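To indicate the general shape of such data-dependent results (an illustrative sketch only, not the paper's exact statement; the constants $c_1$, $c_2$ and the precise form of the complexity term are assumptions), a typical margin-based bound asserts that, with probability at least $1-\delta$ over an i.i.d. sample $(X_1,Y_1),\dots,(X_n,Y_n)$, every classifier $f\in\mathcal{F}$ and every margin $\gamma>0$ satisfy
\[
  L(f) \;\le\; \widehat{L}_\gamma(f)
  \;+\; \sqrt{\frac{c_1 \log \mathcal{N}_\infty\!\left(\gamma/2,\,\mathcal{F},\,X_1^n\right) + c_2 \log(1/\delta)}{n}},
\]
where $L(f)$ is the probability of error, $\widehat{L}_\gamma(f)$ is the fraction of training examples classified with margin smaller than $\gamma$, and $\mathcal{N}_\infty(\gamma/2,\mathcal{F},X_1^n)$ is an $\ell_\infty$ covering number of $\mathcal{F}$ evaluated on the training points themselves. The point of the data-dependent formulation is that the right-hand side is computable from the sample, which is what makes such bounds usable for model selection.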