Robust and efficient regularized boosting using total Bregman divergence

  • Authors:
  • Meizhu Liu; B. C. Vemuri

  • Affiliations:
  • CISE, Univ. of Florida, Gainesville, FL, USA; CISE, Univ. of Florida, Gainesville, FL, USA

  • Venue:
  • CVPR '11: Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition
  • Year:
  • 2011

Abstract

Boosting is a well-known machine learning technique used to improve the performance of weak learners; it has been successfully applied to computer vision, medical image analysis, computational biology, and other fields. A critical step in boosting algorithms is the update of the data sample distribution. However, most existing boosting algorithms use updating mechanisms that lead to overfitting and to instabilities in the evolution of the distribution, which in turn cause classification inaccuracies. Regularized boosting has been proposed in the literature as a means to overcome these difficulties. In this paper, we propose a novel total Bregman divergence (tBD) regularized LPBoost, termed tBRLPBoost. tBD is a recently proposed divergence that is statistically robust, and we prove that tBRLPBoost requires only a constant number of iterations to learn a strong classifier, making it computationally more efficient than other regularized boosting algorithms in the literature. Moreover, unlike boosting methods that are effective only on a handful of datasets, tBRLPBoost works well on a variety of datasets. We present results of testing our algorithm on many public-domain databases, along with comparisons to several other state-of-the-art methods. Numerical results show substantial improvements in efficiency and accuracy over competing methods.
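
For reference, the total Bregman divergence rescales the ordinary Bregman divergence d_f(x, y) = f(x) − f(y) − ⟨x − y, ∇f(y)⟩ by the slope of the tangent plane at y: δ_f(x, y) = d_f(x, y) / √(1 + ‖∇f(y)‖²), measuring the orthogonal rather than the vertical distance, which is what makes it robust and rotation-invariant. The sketch below is a minimal illustration of this quantity only, not the authors' tBRLPBoost implementation; it uses the negative-entropy generator f(p) = Σᵢ pᵢ log pᵢ, which yields the total Kullback-Leibler (tKL) divergence between discrete distributions. The function names and the example inputs are ours.

```python
import numpy as np

def bregman_kl(p, q):
    """Ordinary Bregman divergence for the negative-entropy generator
    f(p) = sum_i p_i * log(p_i), which equals the KL divergence KL(p || q)."""
    return np.sum(p * (np.log(p) - np.log(q)))

def total_kl(p, q):
    """Total Bregman divergence for the same generator (tKL).

    Divides the ordinary divergence by sqrt(1 + ||grad f(q)||^2),
    i.e., the orthogonal instead of the vertical distance to the
    tangent plane at q; this normalization underlies tBD's robustness.
    """
    grad_f_q = 1.0 + np.log(q)  # gradient of f at q: (log q_i + 1)
    return bregman_kl(p, q) / np.sqrt(1.0 + np.sum(grad_f_q ** 2))

# Example: tKL between two sample-weight distributions, e.g., the
# distributions from two successive boosting rounds.
p = np.array([0.4, 0.3, 0.2, 0.1])
q = np.array([0.25, 0.25, 0.25, 0.25])
print(total_kl(p, q))
```

In tBRLPBoost, a tBD term of this kind regularizes the LPBoost update of the sample distribution, penalizing large deviations of the new distribution from the previous one; the exact objective and the constant-iteration analysis are given in the paper.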