How large should ensembles of classifiers be?
Pattern Recognition
The dependence of the classification error on the size of a bagging ensemble can be modeled within the framework of Monte Carlo theory for ensemble learning. These error curves are parametrized by the probability that a given instance is misclassified by a single predictor in the ensemble. Out-of-bootstrap estimates of these probabilities make it possible to model the generalization error curves using only information from the training data. Because these estimates are computed from a finite number of hypotheses, they exhibit fluctuations; as a result, the modeled curves are biased and tend to overestimate the true generalization error. This bias becomes negligible as the number of hypotheses used in the estimator grows sufficiently large. Experiments are carried out to analyze the consistency of the proposed estimator.