Online spatio-temporal structural context learning for visual tracking

  • Authors:
  • Longyin Wen;Zhaowei Cai;Zhen Lei;Dong Yi;Stan Z. Li

  • Affiliations:
  • CBSR & NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China (all authors)

  • Venue:
  • ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part IV
  • Year:
  • 2012

Abstract

Visual tracking is a challenging problem because, in unconstrained environments, the target frequently changes its appearance, moves unpredictably, and becomes occluded by other objects. Since the state changes of the target are temporally and spatially continuous, this paper presents a robust Spatio-Temporal structural context based Tracker (STT) for tracking in unconstrained environments. The temporal context captures the historical appearance of the target to prevent the tracker from drifting to the background during long-term tracking. The spatial context model integrates contributors, which are key-points automatically discovered around the target, to build a supporting field. The supporting field provides much more information than the appearance of the target itself, so the location of the target can be predicted more precisely. Extensive experiments on various challenging databases demonstrate the superiority of the proposed tracker over other state-of-the-art trackers.
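
To make the spatial-context idea concrete, the following is a minimal sketch, not the authors' STT model: it only illustrates how key-point "contributors" around a target can vote for its location through stored offsets. The numpy-only setup, variable names, and synthetic data are all illustrative assumptions; the actual paper learns the contributor model online from image appearance.

```python
import numpy as np

# Hypothetical illustration of the spatial-context idea: each "contributor"
# is a key-point near the target with a stored offset to the target centre.
# In a new frame, the matched contributor positions vote for the centre.

rng = np.random.default_rng(0)

# Previous frame: target centre and key-points discovered around it.
prev_center = np.array([120.0, 80.0])
contributors_prev = prev_center + rng.normal(scale=15.0, size=(8, 2))
offsets = prev_center - contributors_prev          # stored centre offsets

# New frame: the target and its neighbourhood move by an unknown shift,
# and each matched key-point position is observed with some noise.
true_shift = np.array([6.0, -3.0])
contributors_new = (contributors_prev + true_shift
                    + rng.normal(scale=1.0, size=(8, 2)))

# Each contributor casts a vote for the new centre via its offset;
# the votes are averaged (a robust estimator could be used instead).
votes = contributors_new + offsets
predicted_center = votes.mean(axis=0)

print("true centre     :", prev_center + true_shift)
print("predicted centre:", np.round(predicted_center, 2))
```

Because several contributors vote independently, the estimate degrades gracefully when some key-points are occluded or mismatched, which is the intuition behind the supporting field described in the abstract.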