Is a Greedy Covering Strategy an Extreme Boosting?

  • Authors:
  • Roberto Esposito; Lorenza Saitta


  • Venue:
  • ISMIS '02 Proceedings of the 13th International Symposium on Foundations of Intelligent Systems
  • Year:
  • 2002

Abstract

A new view of majority voting as a Monte Carlo stochastic algorithm is presented in this paper. The relation between the two approaches allows AdaBoost's example weighting strategy to be compared with the greedy covering strategy that has long been used in Machine Learning. The greedy covering strategy does not show clear signs of overfitting, runs at least one order of magnitude faster, reaches zero error on the training set in a few trials, and its error on the test set is usually comparable to that of AdaBoost.
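
To make the contrast concrete, the following is a minimal sketch (not the paper's implementation) of the two update strategies on a toy learner whose weak hypotheses are simple threshold rules. AdaBoost keeps every example and exponentially reweights it, whereas greedy covering removes covered positive examples from the training set. The data, thresholds, and function names here are illustrative assumptions.

```python
import math
import random

random.seed(0)
# Toy data: (feature, label) pairs with labels in {-1, +1}.
examples = [(random.uniform(0, 1), random.choice([-1, 1])) for _ in range(20)]
# Each weak hypothesis predicts +1 at or above its threshold, -1 below it.
thresholds = [i / 10 for i in range(1, 10)]

def predict(theta, x):
    return 1 if x >= theta else -1

def adaboost(rounds=5):
    """AdaBoost: keep all examples, reweight after each round so that
    misclassified examples gain weight and correct ones lose it."""
    w = [1 / len(examples)] * len(examples)
    ensemble = []
    for _ in range(rounds):
        # Pick the weak hypothesis with minimum weighted error.
        errs = [sum(wi for wi, (x, y) in zip(w, examples) if predict(t, x) != y)
                for t in thresholds]
        best = min(range(len(thresholds)), key=lambda i: errs[i])
        eps = min(max(errs[best], 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - eps) / eps)
        ensemble.append((alpha, thresholds[best]))
        # Exponential reweighting followed by normalization.
        w = [wi * math.exp(-alpha * y * predict(thresholds[best], x))
             for wi, (x, y) in zip(w, examples)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def greedy_covering():
    """Greedy covering: instead of reweighting, discard covered positives and
    repeatedly keep the rule covering the most still-uncovered positives."""
    uncovered = [(x, y) for x, y in examples if y == 1]
    chosen = []
    while uncovered:
        counts = [sum(1 for x, _ in uncovered if predict(t, x) == 1)
                  for t in thresholds]
        best = max(range(len(thresholds)), key=lambda i: counts[i])
        if counts[best] == 0:
            break  # no rule covers any remaining positive example
        chosen.append(thresholds[best])
        uncovered = [(x, y) for x, y in uncovered
                     if predict(thresholds[best], x) != 1]
    return chosen

print("AdaBoost ensemble (alpha, threshold):", adaboost())
print("Greedy cover (thresholds):", greedy_covering())
```

The sketch only illustrates the difference in how the two strategies update the training set between rounds; it does not reproduce the paper's Monte Carlo analysis or its experimental comparison.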