Combining Bagging, Boosting and Dagging for Classification Problems

  • Authors:
  • S. B. Kotsiantis; D. Kanellopoulos

  • Affiliations:
  • Educational Software Development Laboratory, Department of Mathematics, University of Patras, P.O. Box 1399, Rio 26 500, Greece (both authors)

  • Venue:
  • KES '07: Proceedings of the 11th International Conference on Knowledge-Based Intelligent Information and Engineering Systems, and the XVII Italian Workshop on Neural Networks
  • Year:
  • 2007

Abstract

Bagging, boosting and dagging are well-known re-sampling ensemble methods that generate and combine a diversity of classifiers using the same learning algorithm for the base-classifiers. Boosting algorithms are considered stronger than bagging and dagging on noise-free data; however, there are strong empirical indications that bagging and dagging are much more robust than boosting in noisy settings. For this reason, in this work we build an ensemble that combines bagging, boosting and dagging ensembles, each with 8 sub-classifiers, via a voting methodology. We compared it against simple bagging, boosting and dagging ensembles with 25 sub-classifiers, as well as against other well-known combining methods, on standard benchmark datasets, and the proposed technique achieved better accuracy in most cases.
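
The sketch below illustrates the kind of combination the abstract describes; it is a minimal reconstruction in Python with scikit-learn, not the authors' implementation. It assumes decision trees as the base learner, majority voting over the three ensembles' class predictions, a stand-in dataset (`load_breast_cancer`), and dagging emulated with disjoint stratified folds, since scikit-learn has no dagging class.

```python
# Minimal sketch of voting over bagging, boosting and dagging ensembles,
# each built from 8 sub-classifiers (assumptions noted above).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import StratifiedKFold, train_test_split
from sklearn.tree import DecisionTreeClassifier

N_SUB = 8  # sub-classifiers per ensemble, as in the paper

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Bagging and boosting ensembles built with the same base learning algorithm.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=N_SUB,
                            random_state=0).fit(X_tr, y_tr)
boosting = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                              n_estimators=N_SUB, random_state=0).fit(X_tr, y_tr)

# Dagging: one base classifier per disjoint stratified partition of the data.
dagging = [DecisionTreeClassifier(random_state=0).fit(X_tr[idx], y_tr[idx])
           for _, idx in StratifiedKFold(n_splits=N_SUB).split(X_tr, y_tr)]

def majority_vote(rows):
    # Column-wise majority vote over an (n_models, n_samples) label array.
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, rows)

dagging_pred = majority_vote(np.array([t.predict(X_te) for t in dagging]))
final = majority_vote(np.array([bagging.predict(X_te),
                                boosting.predict(X_te),
                                dagging_pred]))
print("voting accuracy:", (final == y_te).mean())
```

With three component ensembles and binary labels, the top-level majority vote can never tie, which is one practical reason to combine exactly these three methods rather than two.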