Asymptotic theory of finite dimensional normed spaces
Lecture Notes in Mathematics, Springer
Scale-sensitive dimensions, uniform convergence, and learnability
Journal of the ACM (JACM)
Machine learning with data dependent hypothesis classes
The Journal of Machine Learning Research
Algorithmic luckiness
The Journal of Machine Learning Research
Structural risk minimization over data-dependent hierarchies
IEEE Transactions on Information Theory
The importance of convexity in learning with squared loss
IEEE Transactions on Information Theory
Rademacher averages and phase transitions in Glivenko-Cantelli classes
IEEE Transactions on Information Theory
Improving the sample complexity using global data
IEEE Transactions on Information Theory
It has recently been shown that sharp generalization bounds can be obtained when the function class from which the algorithm chooses its hypotheses is "small", in the sense that the Rademacher averages of the class are small. We show that a new, more general principle guarantees good generalization bounds. The new principle requires that random coordinate projections of the function class, evaluated on random samples, are "small" with high probability, and that the random class of functions allows symmetrization. As an example, we prove that this geometric property of the function class is precisely the reason why two recently proposed frameworks, luckiness (Shawe-Taylor et al., 1998) and algorithmic luckiness (Herbrich and Williamson, 2002), can be used to establish generalization bounds.
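Since the abstract turns on Rademacher averages and coordinate projections, it may help to recall the standard definitions; the notation below (the class F, the sample x_1, ..., x_n, the random signs) is conventional and is not taken verbatim from the paper. The empirical Rademacher average of F on a sample is

\[
  \hat{R}_n(F) \;=\; \mathbb{E}_{\varepsilon}\, \sup_{f \in F} \frac{1}{n} \sum_{i=1}^{n} \varepsilon_i f(x_i),
  \qquad \varepsilon_1, \dots, \varepsilon_n \ \text{i.i.d. uniform on } \{-1, +1\},
\]

and the coordinate projection of F on that sample is the subset of \(\mathbb{R}^n\) obtained by evaluating every function at the sample points,

\[
  F_{/(x_1, \dots, x_n)} \;=\; \bigl\{ (f(x_1), \dots, f(x_n)) : f \in F \bigr\} \subseteq \mathbb{R}^n .
\]

"Symmetrization" refers to the standard inequality bounding the expected uniform deviation of empirical means from true means by twice the expected Rademacher average, \(\mathbb{E} \sup_{f \in F} |P_n f - P f| \le 2\, \mathbb{E}\, \hat{R}_n(F)\).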