Improving boosting by exploiting former assumptions

  • Authors:
  • Emna Bahri, Nicolas Nicoloyannis, Mondher Maddouri

  • Affiliations:
  • Laboratory ERIC, University of Lyon 2, Bron Cedex, France (all authors)

  • Venue:
  • MCD'07: Proceedings of the 3rd ECML/PKDD International Conference on Mining Complex Data
  • Year:
  • 2007

Abstract

Reducing generalization error is one of the principal motivations of machine learning research. Consequently, a large body of work addresses classifier aggregation methods, which typically use voting techniques to improve on the performance of a single classifier. Among these aggregation methods, boosting is one of the most practical, thanks to its adaptive update of the example distribution, which exponentially increases the weights of misclassified examples. However, the method is criticized for overfitting and for slow convergence, especially in the presence of noise. In this study, we propose a new approach based on modifications to the AdaBoost algorithm. We demonstrate that the performance of boosting can be improved by exploiting the assumptions (weak hypotheses) generated in former iterations to correct the example weights. An experimental study shows the interest of this new approach, called the hybrid approach.
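As context for the abstract, the sketch below shows the standard AdaBoost weight update it refers to: misclassified examples gain weight exponentially at each round. This is not the authors' hybrid variant (their correction rule that reuses former hypotheses is not reproduced here); the function names and structure are illustrative assumptions, written in Python with scikit-learn decision stumps as weak learners.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, n_rounds=50):
    """Standard AdaBoost.M1 for labels y in {-1, +1}.

    Illustrates the adaptive update the abstract describes: the
    weights of misclassified examples grow exponentially. The
    authors' hybrid variant would additionally exploit hypotheses
    from former iterations when correcting these weights.
    """
    y = np.asarray(y)
    n = len(y)
    w = np.full(n, 1.0 / n)  # uniform initial distribution
    hypotheses, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y))  # weighted training error
        if err >= 0.5:                 # weak learner no better than chance
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        # Exponential update: misclassified examples gain weight
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()                   # renormalize to a distribution
        hypotheses.append(stump)
        alphas.append(alpha)
    return hypotheses, alphas

def predict(hypotheses, alphas, X):
    """Combine all weak hypotheses by weighted majority vote."""
    scores = sum(a * h.predict(X) for h, a in zip(hypotheses, alphas))
    return np.sign(scores)
```

With labels encoded as +/-1 (e.g., y = 2 * y01 - 1 for 0/1 labels), adaboost(X, y) returns the ensemble that predict combines by weighted vote.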