Co-trained generative and discriminative trackers with cascade particle filter

  • Authors:
  • Thang Ba Dinh, Qian Yu, Gérard Medioni


  • Venue:
  • Computer Vision and Image Understanding
  • Year:
  • 2014


Abstract

Visual tracking is a challenging problem, as the appearance of an object may change due to viewpoint variations, illumination changes, and occlusion. The object may also leave the field of view (FOV) and later reappear. In order to track and reacquire an unknown object with limited labeled data, we propose to learn these changes online and incrementally build a model that encodes all appearance variations observed while tracking. To address this semi-supervised learning problem, we propose a co-training framework with a cascade particle filter that labels incoming data continuously and updates hybrid generative and discriminative models online. Each layer in the cascade contains one or more generative or discriminative appearance models. Organizing the particle filter as a cascade enables efficient evaluation of multiple appearance models with different computational costs, thereby improving the speed of the tracker. The proposed online framework provides temporally local tracking that adapts to appearance changes. Moreover, it provides an object-specific detection ability that allows the tracker to reacquire an object after total occlusion. Extensive experiments demonstrate that, under challenging situations, our method has strong reacquisition ability and robustness to distracters in cluttered backgrounds. We also provide quantitative comparisons to other state-of-the-art trackers.
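To make the cascade idea concrete, below is a minimal Python sketch of one cascade particle-filter step, not the authors' implementation. The functions `propagate`, `cascade_step`, and the stand-in scoring lambdas are illustrative assumptions; the key point from the abstract is that cheap appearance models score all particles first, and only the surviving particles are passed to more expensive models.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(particles, motion_std=4.0):
    """Diffuse particle states (x, y, scale) with Gaussian motion noise."""
    noise = rng.normal(scale=[motion_std, motion_std, 0.02], size=particles.shape)
    return particles + noise

def cascade_step(particles, layers, keep_fractions, n_particles=200):
    """One tracking step: propagate, then score particles layer by layer.

    layers: list of functions mapping an (N, 3) state array to (N,) scores,
            ordered from cheapest (e.g. a color-histogram likelihood) to most
            expensive (e.g. an online-trained discriminative classifier).
    keep_fractions: fraction of particles kept after each layer.
    """
    particles = propagate(particles)
    weights = np.ones(len(particles))
    for score_fn, keep in zip(layers, keep_fractions):
        scores = score_fn(particles)                 # appearance likelihood per particle
        weights *= scores
        n_keep = max(1, int(keep * len(particles)))
        order = np.argsort(weights)[::-1][:n_keep]   # keep only the best particles
        particles, weights = particles[order], weights[order]
    weights /= weights.sum()
    estimate = weights @ particles                   # weighted mean state
    # Resample survivors to restore the particle count for the next frame.
    idx = rng.choice(len(particles), size=n_particles, p=weights)
    return estimate, particles[idx]

# Hypothetical usage with stand-in appearance models (purely for illustration).
cheap_model  = lambda p: np.exp(-0.01 * np.abs(p[:, 0] - 100))  # e.g. histogram match
costly_model = lambda p: np.exp(-0.02 * np.abs(p[:, 1] - 50))   # e.g. classifier score
particles = rng.normal([100.0, 50.0, 1.0], [10.0, 10.0, 0.1], size=(200, 3))
state, particles = cascade_step(particles, [cheap_model, costly_model], [0.5, 0.25])
print("estimated state (x, y, scale):", state)
```

The early pruning is what makes mixing cheap and expensive appearance models affordable: in this sketch only a quarter of the particles ever reach the costly model, mirroring the speed benefit the abstract attributes to the cascade organization.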