Entropy Regularized LPBoost

  • Authors:
  • Manfred K. Warmuth; Karen A. Glocer; S. V. Vishwanathan

  • Affiliations:
  • Computer Science Department, University of California, Santa Cruz, CA 95064, U.S.A.; Computer Science Department, University of California, Santa Cruz, CA 95064, U.S.A.; NICTA, Canberra, ACT 2601, Australia

  • Venue:
  • ALT '08: Proceedings of the 19th International Conference on Algorithmic Learning Theory
  • Year:
  • 2008

Abstract

In this paper we discuss boosting algorithms that maximize the soft margin of the produced linear combination of base hypotheses. LPBoost is the most straightforward boosting algorithm for doing this: it maximizes the soft margin by solving a linear programming problem. While it performs well on natural data, there are cases where the number of iterations is linear in the number of examples instead of logarithmic. By simply adding a relative entropy regularization to the linear objective of LPBoost, we arrive at the Entropy Regularized LPBoost algorithm, for which we prove a logarithmic iteration bound. A previous algorithm, called SoftBoost, has the same iteration bound, but its generalization error often decreases slowly in early iterations. Entropy Regularized LPBoost does not suffer from this problem and has a simpler, more natural motivation.
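To make the contrast concrete, the following sketch writes out the per-iteration optimization over the distribution d on the N training examples. The notation (edges u_n^q = y_n h^q(x_n), capping parameter \nu for the soft margin, and regularization parameter \eta) follows the standard LPBoost literature and is an assumption here, not necessarily the exact parameterization used in the paper.

LPBoost at iteration t solves the linear program

  \min_{d} \; \max_{1 \le q \le t} \; \sum_{n=1}^{N} d_n\, u_n^{q}
  \quad \text{s.t.} \quad \sum_{n=1}^{N} d_n = 1, \qquad 0 \le d_n \le \tfrac{1}{\nu}.

Entropy Regularized LPBoost adds a relative entropy term to the uniform initial distribution d^0:

  \min_{d} \; \max_{1 \le q \le t} \; \sum_{n=1}^{N} d_n\, u_n^{q} \;+\; \frac{1}{\eta}\, \Delta(d, d^{0}),
  \qquad \Delta(d, d^{0}) = \sum_{n=1}^{N} d_n \ln \frac{d_n}{d_n^{0}},

subject to the same capped-simplex constraints. Intuitively, the entropy term keeps d close to the uniform distribution and smooths the otherwise purely linear objective, which is what underlies the logarithmic iteration bound discussed above.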