Boosting Shift-Invariant Features

  • Authors:
  • Thomas Hörnlein; Bernd Jähne

  • Affiliations:
  • Heidelberg Collaboratory for Image Processing, University of Heidelberg, 69115 Heidelberg, Germany (both authors)

  • Venue:
  • Proceedings of the 31st DAGM Symposium on Pattern Recognition
  • Year:
  • 2009

Abstract

This work presents a novel method for training shift-invariant features within a Boosting framework. Shift-invariance is achieved by features that perform local convolutions followed by subsampling. Other systems using this type of feature, e.g. Convolutional Neural Networks, rely on complex feed-forward networks with multiple layers. In contrast, the proposed system adds features one at a time using smoothing-spline base classifiers. Feature training minimizes the base-classifier cost, and the Boosting sample re-weighting ensures that the features are both descriptive and independent. Our system has fewer design parameters than comparable systems, so adapting it to new problems is simple. Moreover, the stage-wise training makes it very scalable. Experimental results show the competitiveness of our approach.
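
To make the abstract's idea concrete, below is a minimal, illustrative sketch (not the authors' implementation) of a Boosting loop in which each weak learner is a shift-invariant feature: a small convolution kernel whose response map is subsampled (here by global max pooling) and then fed to a simple base classifier. All names (feature_response, fit_stump, boost_shift_invariant), the kernel size, the candidate count, and the pooling choice are assumptions; random candidate kernels and a regression stump stand in for the paper's optimized kernels and smoothing-spline base classifiers.

```python
# Illustrative sketch only: AdaBoost-style stage-wise training of
# convolution + subsampling ("shift-invariant") features.
import numpy as np

def feature_response(images, kernel):
    """Local convolution followed by subsampling (global max pooling)."""
    k = kernel.shape[0]
    n, h, w = images.shape
    out = np.empty(n)
    for i, img in enumerate(images):
        # valid convolution via explicit sliding windows (clarity over speed)
        resp = np.array([
            np.sum(img[r:r + k, c:c + k] * kernel)
            for r in range(h - k + 1) for c in range(w - k + 1)
        ])
        out[i] = resp.max()          # subsampling -> one shift-invariant scalar
    return out

def fit_stump(f, y, w):
    """Weighted regression stump on a scalar feature (stand-in base classifier)."""
    best = None
    for thr in np.quantile(f, np.linspace(0.1, 0.9, 9)):
        for sign in (+1, -1):
            pred = sign * np.where(f > thr, 1.0, -1.0)
            err = np.sum(w * (pred != y))
            if best is None or err < best[0]:
                best = (err, thr, sign)
    _, thr, sign = best
    return lambda f_new: sign * np.where(f_new > thr, 1.0, -1.0)

def boost_shift_invariant(images, y, n_rounds=10, kernel_size=3,
                          n_candidates=50, seed=None):
    """Stage-wise training: add one convolution/subsampling feature per round."""
    rng = np.random.default_rng(seed)
    n = len(y)
    w = np.full(n, 1.0 / n)                      # Boosting sample weights
    ensemble = []
    for _ in range(n_rounds):
        # Random candidate kernels; the paper instead optimizes the kernel
        # against the base-classifier cost.
        best = None
        for _ in range(n_candidates):
            kernel = rng.standard_normal((kernel_size, kernel_size))
            f = feature_response(images, kernel)
            h = fit_stump(f, y, w)
            err = np.clip(np.sum(w * (h(f) != y)), 1e-10, 1 - 1e-10)
            if best is None or err < best[0]:
                best = (err, kernel, h)
        err, kernel, h = best
        alpha = 0.5 * np.log((1 - err) / err)    # weight of the new feature
        ensemble.append((alpha, kernel, h))
        # Re-weight samples so the next feature focuses on remaining errors,
        # which keeps the features descriptive and mutually independent.
        w *= np.exp(-alpha * y * h(feature_response(images, kernel)))
        w /= w.sum()
    return ensemble

def predict(ensemble, images):
    score = sum(a * h(feature_response(images, k)) for a, k, h in ensemble)
    return np.sign(score)
```

Under these assumptions, a toy call would be `ensemble = boost_shift_invariant(images, labels)` for `images` of shape `(n, height, width)` and `labels` in {-1, +1}, followed by `predict(ensemble, test_images)`.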