A decision-theoretic generalization of on-line learning and an application to boosting
Journal of Computer and System Sciences - Special issue: 26th annual ACM symposium on the Theory of Computing (STOC'94), May 23–25, 1994, and second annual European conference on Computational Learning Theory (EuroCOLT'95), March 13–15, 1995
The Journal of Machine Learning Research
Semi-Supervised Learning on Riemannian Manifolds
Machine Learning
Stability of Randomized Learning Algorithms
The Journal of Machine Learning Research
Bootstrapping rule induction to achieve rule stability and reduction
Journal of Intelligent Information Systems
Stability of Unstable Learning Algorithms
Machine Learning
Stability Properties of Empirical Risk Minimization over Donsker Classes
The Journal of Machine Learning Research
Semi-analytical method for analyzing models and model selection measures based on moment analysis
ACM Transactions on Knowledge Discovery from Data (TKDD)
Robustness of reweighted Least Squares Kernel Based Regression
Journal of Multivariate Analysis
Robustness and Regularization of Support Vector Machines
The Journal of Machine Learning Research
Approximation stability and boosting
ALT'10 Proceedings of the 21st international conference on Algorithmic learning theory
Learnability, Stability and Uniform Convergence
The Journal of Machine Learning Research
SIAM Journal on Computing
The bounds on the rate of uniform convergence for learning machine
ISNN'05 Proceedings of the Second international conference on Advances in Neural Networks - Volume Part I
ICNC'06 Proceedings of the Second international conference on Advances in Natural Computation - Volume Part I
COLT'06 Proceedings of the 19th annual conference on Learning Theory
Stability and generalization of bipartite ranking algorithms
COLT'05 Proceedings of the 18th annual conference on Learning Theory
Generalization error bounds using unlabeled data
COLT'05 Proceedings of the 18th annual conference on Learning Theory
Permutation tests for classification
COLT'05 Proceedings of the 18th annual conference on Learning Theory
Learning Rates for Regularized Classifiers Using Trigonometric Polynomial Kernels
Neural Processing Letters
Geometry of online packing linear programs
ICALP'12 Proceedings of the 39th international colloquium conference on Automata, Languages, and Programming - Volume Part I
A statistical view of clustering performance through the theory of U-processes
Journal of Multivariate Analysis
We explore in some detail the notion of algorithmic stability as a viable framework for analyzing the generalization error of learning algorithms. We introduce the new notion of training stability of a learning algorithm and show that, in a general setting, it is sufficient for good bounds on generalization error. In the PAC setting, training stability is both necessary and sufficient for learnability. The approach based on training stability makes no reference to VC dimension or VC entropy. There is no need to prove uniform convergence, and generalization error is bounded directly via an extended McDiarmid inequality. As a result it potentially allows us to deal with a broader class of learning algorithms than Empirical Risk Minimization. We also explore the relationships among VC dimension, generalization error, and various notions of stability. Several examples of learning algorithms are considered.
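The abstract's central idea, that a small change in the training set should produce only a small change in the learned hypothesis, and that this suffices to bound generalization error via a McDiarmid-type concentration inequality, can be illustrated empirically. Below is a minimal sketch (not the paper's exact definitions) that measures a replace-one-point stability gap for ridge regression, a classic uniformly stable algorithm; the learner, the function name `stability_gap`, and all parameters are illustrative assumptions, not from the paper.

```python
# Hedged sketch: empirically probe algorithmic stability by training a
# ridge-regression learner on a sample S and on S^i (S with example i
# replaced), then measuring how much the predictor's output changes on a
# fixed test point. Uniform-stability theory predicts this gap shrinks
# roughly like O(1 / (lambda * n)) as the sample size n grows.
import numpy as np

rng = np.random.default_rng(0)

def train_ridge(X, y, lam):
    """Ridge weights: argmin_w ||Xw - y||^2 + lam * ||w||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def stability_gap(n, d=5, lam=1.0):
    """Max change in the prediction on one test point after replacing
    a single training example (a replace-one stability proxy)."""
    X = rng.normal(size=(n, d))
    y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
    w = train_ridge(X, y, lam)
    x_test = rng.normal(size=d)
    gap = 0.0
    for i in range(n):
        Xi, yi = X.copy(), y.copy()
        Xi[i] = rng.normal(size=d)   # replace example i with a fresh draw
        yi[i] = rng.normal()
        wi = train_ridge(Xi, yi, lam)
        gap = max(gap, abs(x_test @ (w - wi)))
    return gap

for n in (50, 200, 800):
    print(n, stability_gap(n))
```

In a McDiarmid-style argument, a bound on exactly this kind of bounded-difference quantity is what converts stability into a high-probability generalization bound, with no appeal to VC dimension or uniform convergence.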