We apply an analytical framework for the analysis of linearly combined classifiers to ensembles generated by bagging. This yields an analytical model of bagging misclassification probability as a function of the ensemble size, which is a novel result in the literature. Experimental results on real data sets confirm the theoretical predictions. This allows us to derive a novel and theoretically grounded guideline for choosing the bagging ensemble size. Furthermore, our results are consistent with explanations of bagging in terms of classifier instability and variance reduction, support the optimality of the simple average over the weighted average combining rule for ensembles generated by bagging, and apply to other randomization-based methods for constructing classifier ensembles. Although our results do not allow a direct comparison between the misclassification probability of bagging and that of an individual classifier trained on the original training set, we discuss how the considered theoretical framework could be exploited to this aim.
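To give a feel for the kind of size-dependence the abstract refers to, the sketch below computes the misclassification probability of a majority-vote ensemble of n base classifiers under the idealized assumption that their errors are independent with a common rate p. This binomial toy model is not the paper's analytical framework (bagged classifiers are correlated), but it reproduces the qualitative behaviour that motivates a size-selection guideline: the error drops quickly with n and then plateaus.

```python
from math import comb

def majority_vote_error(p: float, n: int) -> float:
    """Misclassification probability of a majority-vote ensemble of n
    base classifiers, assuming independent errors at a common rate p.
    Idealized toy model, not the paper's analytical model of bagging."""
    # The ensemble errs when more than half of the n votes are wrong.
    err = sum(comb(n, k) * p**k * (1 - p) ** (n - k)
              for k in range(n // 2 + 1, n + 1))
    # For even n, break exact ties at random (wrong with probability 1/2).
    if n % 2 == 0:
        err += 0.5 * comb(n, n // 2) * p ** (n // 2) * (1 - p) ** (n // 2)
    return err

# For p < 0.5 the error shrinks with ensemble size and then flattens out,
# so growing the ensemble beyond a certain size buys little.
for n in (1, 5, 11, 25, 101):
    print(n, round(majority_vote_error(0.3, n), 4))
```

Under these independence assumptions the curve flattens after a few dozen members; the paper's contribution is precisely to replace such idealized assumptions with an analytical model validated on real data.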