Boosting a weak learning algorithm by majority
Information and Computation
A decision-theoretic generalization of on-line learning and an application to boosting
Journal of Computer and System Sciences - Special issue: 26th annual ACM symposium on the theory of computing & STOC'94, May 23–25, 1994, and second annual European conference on computational learning theory (EuroCOLT'95), March 13–15, 1995
Using diversity of errors for selecting members of a committee classifier
Pattern Recognition
On the Effectiveness of Diversity When Training Multiple Classifier Systems
ECSQARU '09 Proceedings of the 10th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty
A study of cross-validation and bootstrap for accuracy estimation and model selection
IJCAI'95 Proceedings of the 14th international joint conference on Artificial intelligence - Volume 2
Adaptive learning of nominal concepts for supervised classification
KES'10 Proceedings of the 14th international conference on Knowledge-based and intelligent information and engineering systems: Part I
“Good” and “bad” diversity in majority vote ensembles
MCS'10 Proceedings of the 9th international conference on Multiple Classifier Systems
In this paper, we investigate how the diversity of nominal classifier ensembles affects AdaBoost performance [13]. Using 5 real data sets from the UCI Machine Learning Repository and 3 different diversity measures, we show that the $\mathcal{Q}$ statistic is the measure most strongly correlated with AdaBoost performance on 2-class problems. The experimental results suggest that AdaBoost performance depends on the diversity of the nominal classifiers, which can therefore be used as a stopping criterion in ensemble learning.
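As a concrete illustration, a minimal sketch of the pairwise $\mathcal{Q}$ statistic (Yule's Q) referenced in the abstract is shown below; the function name and inputs are illustrative, not taken from the paper. For two classifiers, Q is computed from the counts of examples both classify correctly ($N^{11}$), both misclassify ($N^{00}$), and each classifies correctly alone ($N^{10}$, $N^{01}$), as $Q = (N^{11}N^{00} - N^{01}N^{10})/(N^{11}N^{00} + N^{01}N^{10})$:

```python
def q_statistic(pred_a, pred_b, y_true):
    """Yule's Q statistic for a pair of classifiers.

    Q = (N11*N00 - N01*N10) / (N11*N00 + N01*N10), where
    N11: both correct, N00: both wrong,
    N10: only the first correct, N01: only the second correct.
    Q ranges over [-1, 1]; values near 1 indicate the classifiers
    err on the same examples (low diversity), values near -1
    indicate they err on different examples (high diversity).
    """
    n11 = n00 = n10 = n01 = 0
    for pa, pb, y in zip(pred_a, pred_b, y_true):
        correct_a, correct_b = (pa == y), (pb == y)
        if correct_a and correct_b:
            n11 += 1
        elif correct_a:
            n10 += 1
        elif correct_b:
            n01 += 1
        else:
            n00 += 1
    denom = n11 * n00 + n01 * n10
    # Convention here: return 0.0 when the statistic is undefined
    # (no joint errors and no disagreements of the mixed kind).
    return (n11 * n00 - n01 * n10) / denom if denom else 0.0
```

An ensemble-level diversity score can then be taken as the average of `q_statistic` over all pairs of ensemble members; the paper's proposed stopping criterion would monitor such a score as boosting rounds are added.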