In this paper we prove a result fundamental to the generalization properties of Vapnik's support vector machines and other large-margin classifiers. Specifically, we prove that the minimum margin over all dichotomies of k ≤ n + 1 points inside the unit ball of R^n is maximized when the points form a regular simplex on the unit sphere. We also give an alternative proof directly in the framework of level fat-shattering.
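As a numerical sanity check of the theorem's statement (a sketch, not the paper's proof), the Python snippet below compares the minimum margin over all dichotomies for a regular simplex against random points on the unit sphere. The helper functions are our own, and scikit-learn's linear SVC with a large C is assumed here as a stand-in for the exact hard-margin separator.

import itertools
import numpy as np
from sklearn.svm import SVC  # large-C linear SVC approximates the hard-margin separator

def regular_simplex(n):
    # n + 1 unit vectors in R^n with pairwise inner products -1/n (a regular simplex).
    G = np.full((n + 1, n + 1), -1.0 / n) + (1.0 + 1.0 / n) * np.eye(n + 1)
    w, U = np.linalg.eigh(G)                  # factor the Gram matrix G = V V^T
    V = U * np.sqrt(np.clip(w, 0.0, None))
    return V[:, 1:]                           # drop the (near-)zero eigendirection

def min_margin_over_dichotomies(X):
    # Minimum, over all non-trivial labelings, of the (approximate) maximum margin.
    # Each dichotomy appears twice (as y and -y); harmless for a sketch.
    k = len(X)
    worst = np.inf
    for bits in itertools.product([-1, 1], repeat=k):
        y = np.array(bits)
        if abs(y.sum()) == k:                 # skip the two one-class labelings
            continue
        clf = SVC(kernel="linear", C=1e6).fit(X, y)
        worst = min(worst, 1.0 / np.linalg.norm(clf.coef_))  # geometric margin = 1/||w||
    return worst

n = 3
rng = np.random.default_rng(0)
random_pts = rng.normal(size=(n + 1, n))
random_pts /= np.linalg.norm(random_pts, axis=1, keepdims=True)  # project onto the sphere

print("regular simplex:", min_margin_over_dichotomies(regular_simplex(n)))
print("random points  :", min_margin_over_dichotomies(random_pts))

On typical runs the simplex configuration should yield a larger worst-case margin than the random configuration, consistent with the theorem.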