Ensemble Methods in Machine Learning

  • Authors: Thomas G. Dietterich


  • Venue: MCS '00: Proceedings of the First International Workshop on Multiple Classifier Systems
  • Year: 2000


Abstract

Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boosting. This paper reviews these methods and explains why ensembles can often perform better than any single classifier. Some previous studies comparing ensemble methods are reviewed, and some new experiments are presented to uncover the reasons that AdaBoost does not overfit rapidly.
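
The abstract's core prediction rule, combining a set of trained classifiers by a (weighted) vote, can be illustrated with a minimal sketch. The sketch below is not from the paper: the `Stub` classifiers and the weight values are hypothetical placeholders standing in for real base learners (e.g., those produced by Bagging or boosting), and only the voting mechanics are shown.

```python
# Illustrative sketch (assumptions noted above): weighted majority voting
# over a set of already-trained classifiers, the prediction step shared by
# many ensemble methods.
from collections import defaultdict

def weighted_vote(classifiers, weights, x):
    """Return the class label that receives the largest total weight."""
    scores = defaultdict(float)
    for clf, w in zip(classifiers, weights):
        scores[clf.predict(x)] += w
    return max(scores, key=scores.get)

# Hypothetical stand-in "classifiers" exposing a predict() interface,
# used only to demonstrate the voting mechanics.
class Stub:
    def __init__(self, label):
        self.label = label
    def predict(self, x):
        return self.label

ensemble = [Stub("A"), Stub("B"), Stub("A")]
weights = [0.5, 0.8, 0.4]   # e.g., weights derived from training performance
print(weighted_vote(ensemble, weights, x=None))  # -> "A" (total 0.9 vs 0.8)
```

With uniform weights this reduces to the simple majority vote used by Bagging; boosting methods such as AdaBoost instead assign each classifier a weight based on its training error.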