Scale-Space Kernels for Additive Modeling

  • Authors:
  • Chandan K. Reddy; Jin-Hyeong Park

  • Affiliations:
  • Department of Computer Science, Wayne State University, Detroit, MI 48202, USA; Integrated Data Systems Department, Siemens Corporate Research, Princeton, NJ 08540, USA

  • Venue:
  • SSPR & SPR '08: Proceedings of the 2008 Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition
  • Year:
  • 2008

Abstract

Various forms of additive modeling techniques are widely used in pattern recognition and machine learning applications. The efficiency of any additive modeling technique depends significantly on the choice of the weak learner and the form of the loss function. In this paper, we propose a novel scale-space kernel based approach for additive modeling. Our method applies insights from well-studied scale-space theory to choose suitable weak learners during the different iterations of boosting algorithms, which are simple yet powerful additive modeling methods. At each iteration of the additive modeling, a weak learner that best fits the current resolution of the data is chosen, and the resolution is then increased systematically. We demonstrate the results of the proposed framework on both synthetic and real datasets taken from the UCI machine learning repository. Though demonstrated specifically in the context of boosting algorithms, our approach is generic enough to be accommodated in general additive modeling techniques. Similarities to and distinctions from the widely used radial basis function networks and wavelet decomposition methods are also discussed.
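
To make the coarse-to-fine idea in the abstract concrete, the following is a minimal Python sketch of stagewise additive modeling with kernel weak learners whose scale is decreased over iterations. It assumes squared-error loss, single Gaussian (RBF) weak learners, centers restricted to the training points, and a geometric bandwidth schedule; these specifics are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaussian_kernel(x, center, bandwidth):
    """Gaussian (RBF) basis function centered at `center` with width `bandwidth`."""
    return np.exp(-0.5 * ((x - center) / bandwidth) ** 2)

def scale_space_boost(x, y, n_iters=20, bandwidths=None):
    """Coarse-to-fine additive fit: at each iteration one Gaussian weak learner
    is fit to the current residuals, and the kernel bandwidth (scale) is
    decreased so later learners capture progressively finer structure.

    `bandwidths` defaults to a geometric schedule from the data range down to
    a small fraction of it (an assumed schedule, not the paper's)."""
    if bandwidths is None:
        span = x.max() - x.min()
        bandwidths = np.geomspace(span, span / 50.0, n_iters)

    residual = y.copy()
    model = []  # list of (center, bandwidth, coefficient) triples
    for h in bandwidths:
        best = None
        for c in x:  # candidate centers: the training points themselves
            phi = gaussian_kernel(x, c, h)
            # Least-squares coefficient for this single basis function.
            beta = phi @ residual / (phi @ phi + 1e-12)
            sse = np.sum((residual - beta * phi) ** 2)
            if best is None or sse < best[0]:
                best = (sse, c, beta)
        _, c, beta = best
        residual -= beta * gaussian_kernel(x, c, h)
        model.append((c, h, beta))
    return model

def predict(model, x):
    """Evaluate the additive model (sum of fitted Gaussian kernels)."""
    return sum(beta * gaussian_kernel(x, c, h) for c, h, beta in model)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 200)
    y = np.sin(x) + 0.3 * np.sin(5 * x) + 0.05 * rng.standard_normal(x.size)
    model = scale_space_boost(x, y)
    print("final RMSE:", np.sqrt(np.mean((y - predict(model, x)) ** 2)))
```

The key design choice illustrated here is that the bandwidth schedule, rather than the data, drives the resolution: early iterations can only explain smooth, large-scale trends, and finer detail is absorbed only once the scale shrinks, mirroring the scale-space progression the abstract describes.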