We extend the existing theory on the stability and generalization performance of deterministic learning algorithms to the case of randomized algorithms, where stability measures how much changes in the training data influence the estimated models. We give formal definitions of stability for randomized algorithms and prove non-asymptotic bounds on the difference between the empirical and expected error, as well as between the leave-one-out and expected error, of such algorithms; the bounds depend on the algorithms' random stability. The setup we develop for this purpose can also be used to study randomized learning algorithms more generally. We then apply these general results to study the effect of bagging on the stability of a learning method and to prove non-asymptotic bounds on the predictive performance of bagging, bounds that could not be obtained with the existing theory of stability for deterministic learning algorithms.
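To make the quantities in the abstract concrete, the sketch below is a minimal, hypothetical illustration (not the paper's construction): it empirically estimates a hypothesis-stability-like quantity for a bagged predictor by measuring how much the averaged prediction at a probe point changes when one training example is removed, with the expectation over the algorithm's internal randomness (the bootstrap draws) approximated by averaging many bags. The ridge base learner, the synthetic data, and all names are assumptions made for this example.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: a simple, stable base learner (illustrative choice)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def bagged_predict(X_train, y_train, X_test, n_bags=200, rng=None):
    """Bagging: average predictors fit on bootstrap resamples.
    The bootstrap draws are the algorithm's internal randomness."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(y_train)
    preds = np.zeros(len(X_test))
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)          # one bootstrap resample
        w = ridge_fit(X_train[idx], y_train[idx])
        preds += X_test @ w
    return preds / n_bags                          # average over the random draws

# Synthetic data (illustrative only).
rng = np.random.default_rng(0)
n, d = 60, 3
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)

# Empirically estimate a stability-like quantity: the average change in the
# bagged prediction at a probe point when one training example is removed,
# averaging over many bags to approximate the expectation over the
# internal randomness (shared seeds act as common random numbers).
x_probe = rng.normal(size=(1, d))
full = bagged_predict(X, y, x_probe, rng=np.random.default_rng(1))[0]
changes = []
for i in range(n):
    keep = np.delete(np.arange(n), i)
    loo = bagged_predict(X[keep], y[keep], x_probe,
                         rng=np.random.default_rng(1))[0]
    changes.append(abs(full - loo))
print(f"estimated stability constant ~ {np.mean(changes):.4f}")
```

A small estimate here, relative to the same measurement for a single base learner fit on the full sample, is consistent with the intuition that bagging can improve stability; the non-asymptotic bounds discussed in the abstract relate such stability constants to the gap between empirical (or leave-one-out) and expected error.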