Random patch based video tracking via boosting the relative spaces

  • Authors:
  • Duowen Chen; Jing Zhang; Ming Tang

  • Affiliations:
  • Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China (all authors)

  • Venue:
  • ICASSP '09 Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing
  • Year:
  • 2009


Abstract

In this paper, we propose a new visual tracking method based on the recently popular tracking-as-classification idea. We focus on exploiting the intra-class variance of the foreground target to construct and update a classification-based tracker. In our approach, the foreground target is represented by a set of model patches, and several types of features are used jointly to describe those patches. Individual weak learners are trained on each model patch's relative space, and the AdaBoost framework selects among these weak classifiers and combines them into a strong classifier that serves as the tracker for the next frame. Moreover, with each new tracking result, the tracker is adapted to changes in the scene so that it remains discriminative throughout the entire sequence. We demonstrate the effectiveness of our approach with comparative results on common video sequences.
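The abstract's core mechanism, selecting weak classifiers with AdaBoost and combining them into a strong classifier, can be illustrated with a minimal sketch. The sketch below uses generic decision stumps over per-patch feature responses as the weak learners; the paper's actual weak learners operate on each model patch's relative space, which is not specified here, so the stump form, function names, and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def train_adaboost(features, labels, n_rounds=10):
    """Discrete AdaBoost: each round picks the stump (feature index,
    threshold, polarity) with lowest weighted error, then reweights
    samples to emphasize the mistakes of the chosen stump.

    features: (n_samples, n_features) patch feature responses (illustrative)
    labels:   (n_samples,) array with values in {-1, +1}
    Returns a list of (feature_idx, threshold, polarity, alpha) tuples.
    """
    n, d = features.shape
    w = np.full(n, 1.0 / n)                 # uniform sample weights
    strong = []
    for _ in range(n_rounds):
        best, best_err = None, np.inf
        for j in range(d):                  # exhaustive stump search
            for thr in np.unique(features[:, j]):
                for pol in (+1, -1):
                    pred = np.where(features[:, j] > thr, 1, -1) * pol
                    err = w[pred != labels].sum()
                    if err < best_err:
                        best_err, best = err, (j, thr, pol)
        eps = np.clip(best_err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - eps) / eps)   # weak-learner weight
        j, thr, pol = best
        pred = np.where(features[:, j] > thr, 1, -1) * pol
        w *= np.exp(-alpha * labels * pred)     # boost misclassified samples
        w /= w.sum()
        strong.append((j, thr, pol, alpha))
    return strong

def predict(strong, features):
    """Sign of the alpha-weighted vote of the selected weak classifiers."""
    score = np.zeros(features.shape[0])
    for j, thr, pol, alpha in strong:
        score += alpha * np.where(features[:, j] > thr, 1, -1) * pol
    return np.sign(score)
```

In the tracking setting described by the abstract, the positive samples would come from the current target location and the negatives from the surrounding background, and re-running the selection on each new frame's samples corresponds to the adaptive update the authors describe.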