Boosting Density Function Estimators

  • Authors:
  • Franck Thollard, Marc Sebban, Philippe Ezequel

  • Venue:
  • ECML '02 Proceedings of the 13th European Conference on Machine Learning
  • Year:
  • 2002

Abstract

In this paper, we focus on adapting boosting to density function estimation, which is useful in a number of fields including Natural Language Processing and Computational Biology. Boosting has previously been used to optimize classification algorithms, improving generalization accuracy by combining many classifiers. The core of the boosting strategy, in the well-known ADABOOST algorithm [4], consists of updating the distribution over the learning instances, increasing (resp. decreasing) the weights of the examples misclassified (resp. correctly classified) by the current classifier. Apart from [17, 18], few works have attempted to exploit the interesting theoretical properties of boosting (such as margin maximization) independently of a classification task. In this paper, we optimize not a classifier from its classification errors, but an estimator of a given target density (here a probabilistic automaton) from its density estimation errors. Experimental results are presented that show the benefits of our approach.
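The sketch below illustrates, in Python, the high-level idea the abstract describes: transposing the ADABOOST distribution update from classification error to density estimation error. It is a minimal sketch under stated assumptions, not the paper's algorithm: the names (boost_density, fit_density_estimator), the clipped absolute-deviation error, and the final mixture combination are all illustrative choices, and the weak learner in the paper is a probabilistic automaton whose inference procedure is not reproduced here.

    import math

    def boost_density(sample, target_density, fit_density_estimator, n_rounds=10):
        """AdaBoost-style boosting driven by density estimation error.

        Hypothetical interface: fit_density_estimator(sample, weights) is
        assumed to return a callable density estimate fitted under the
        given instance weights.
        """
        n = len(sample)
        weights = [1.0 / n] * n              # D_1: uniform over the sample
        estimators, alphas = [], []
        for _ in range(n_rounds):
            est = fit_density_estimator(sample, weights)
            # Per-instance estimation error in [0, 1]: a clipped absolute
            # deviation from the target density (an illustrative choice).
            errs = [min(1.0, abs(est(x) - target_density(x))) for x in sample]
            eps = max(sum(w * e for w, e in zip(weights, errs)), 1e-12)
            if eps >= 0.5:                   # weak-estimator condition fails
                break
            alpha = 0.5 * math.log((1.0 - eps) / eps)
            # Increase the weight of badly estimated instances and decrease
            # that of well estimated ones, mirroring ADABOOST's update on
            # misclassified vs. correctly classified examples.
            weights = [w * math.exp(alpha * (2.0 * e - 1.0))
                       for w, e in zip(weights, errs)]
            z = sum(weights)                 # renormalize to a distribution
            weights = [w / z for w in weights]
            estimators.append(est)
            alphas.append(alpha)
        if not alphas:                       # no round met the condition
            return est
        total = sum(alphas)
        # Final estimate: an alpha-weighted convex mixture of the rounds,
        # which remains a valid density since each component is one.
        return lambda x: sum(a * e(x) for a, e in zip(alphas, estimators)) / total

In this real-valued variant of the update, an error of 1 plays the role of a misclassification and an error of 0 that of a correct classification; any weighted maximum-likelihood density learner could be plugged in as fit_density_estimator.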