In this paper we extend previous results by providing a theoretical analysis of a new Monte Carlo ensemble classifier. The framework allows us to characterize the conditions under which the ensemble approach can be expected to outperform a single-hypothesis classifier. Moreover, we provide a closed-form expression for the distribution of the true ensemble accuracy, as well as for its mean and variance. We then exploit this result to analyze the expected error behavior in a particularly interesting case.
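The flavor of the "ensemble outperforms a single hypothesis" condition can be illustrated with a simple majority-vote sketch. Under the (simplifying, not the paper's exact) assumption that the ensemble takes a majority vote over m hypotheses whose errors are independent and each correct with probability p, the ensemble accuracy has a closed form as a binomial tail, and it exceeds p exactly when p > 1/2:

```python
from math import comb

def majority_vote_accuracy(p: float, m: int) -> float:
    """Probability that a majority of m independent voters,
    each correct with probability p, is correct (m odd, no ties)."""
    assert m % 2 == 1, "use an odd ensemble size to avoid ties"
    # Sum the binomial tail: P(at least (m+1)/2 of m voters are correct).
    return sum(comb(m, k) * p ** k * (1 - p) ** (m - k)
               for k in range(m // 2 + 1, m + 1))

# With p = 0.7 the 11-member ensemble beats the single hypothesis;
# with p = 0.4 it does worse; at p = 0.5 it is exactly break-even.
print(majority_vote_accuracy(0.7, 11))
print(majority_vote_accuracy(0.4, 11))
print(majority_vote_accuracy(0.5, 11))
```

The independence assumption is, of course, what real ensembles violate; the analysis referred to in the abstract characterizes the behavior without relying on this toy setting.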