Multi-resolution boosting for classification and regression problems

  • Authors:
  • Chandan K. Reddy; Jin-Hyeong Park

  • Affiliations:
  • Department of Computer Science, Wayne State University, Detroit, MI 48202, USA
  • Integrated Data Systems Department, Siemens Corporate Research, Princeton, NJ 08540, USA

  • Venue:
  • Knowledge and Information Systems
  • Year:
  • 2011

Abstract

Various forms of additive modeling techniques have been used successfully in many data mining and machine learning applications. In spite of this success, boosting algorithms still suffer from several open problems that require closer investigation. The efficiency of any additive modeling technique depends significantly on the choice of the weak learners and the form of the loss function. In this paper, we propose a novel multi-resolution approach for choosing the weak learners during additive modeling. Our method applies insights from multi-resolution analysis and chooses the optimal learners at multiple resolutions during different iterations of the boosting algorithm, a simple yet powerful additive modeling method. We demonstrate the advantages of this framework on both classification and regression problems and report results on synthetic datasets as well as real-world datasets from the UCI machine learning repository. Although demonstrated specifically in the context of boosting, the framework can be easily accommodated in general additive modeling techniques. Similarities to, and distinctions from, popular methods such as radial basis function networks are also discussed.
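
To illustrate the idea described in the abstract, the sketch below shows a toy stagewise additive (boosting-style) regression loop in which, at every round, candidate weak learners are fitted at several resolutions and the one that most reduces the squared-error loss on the current residuals is kept. This is not the authors' implementation: the piecewise-constant weak learners, the particular set of resolutions, the learning rate, and the function names are assumptions chosen only to make the multi-resolution selection step concrete.

```python
# Minimal sketch of multi-resolution weak-learner selection in boosting
# (illustrative only; not the algorithm proposed in the paper).
import numpy as np

def fit_piecewise_constant(x, r, n_bins):
    """Fit a piecewise-constant learner at a given resolution (n_bins) to residuals r."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    values = np.array([r[idx == b].mean() if np.any(idx == b) else 0.0
                       for b in range(n_bins)])
    return lambda xq: values[np.clip(np.digitize(xq, edges[1:-1]), 0, n_bins - 1)]

def multiresolution_boost(x, y, resolutions=(2, 4, 8, 16), n_rounds=50, lr=0.1):
    """Greedy stagewise additive model; the resolution is chosen per round by loss."""
    pred = np.zeros_like(y, dtype=float)
    learners = []
    for _ in range(n_rounds):
        residual = y - pred
        # Fit one candidate weak learner per resolution on the current residuals.
        candidates = [fit_piecewise_constant(x, residual, nb) for nb in resolutions]
        # Keep the candidate with the smallest squared-error loss this round.
        losses = [np.mean((residual - c(x)) ** 2) for c in candidates]
        best = candidates[int(np.argmin(losses))]
        pred += lr * best(x)
        learners.append(best)
    return lambda xq: lr * sum(f(xq) for f in learners)

# Toy usage: noisy sine regression.
rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, 300)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)
model = multiresolution_boost(x, y)
print("train MSE:", np.mean((y - model(x)) ** 2))
```

In this sketch the "resolution" of a weak learner is simply the number of bins in a piecewise-constant fit, so early rounds can capture coarse structure while later rounds may select finer partitions; the paper's framework generalizes this idea to other weak learners and loss functions for both classification and regression.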