Robust visual tracking based on online learning sparse representation

  • Authors:
  • Shengping Zhang;Hongxun Yao;Huiyu Zhou;Xin Sun;Shaohui Liu

  • Affiliations:
  • Shengping Zhang, Hongxun Yao, Xin Sun, Shaohui Liu: School of Computer Science and Technology, Harbin Institute of Technology, China; Huiyu Zhou: Institute of Electronics, Communications and Information Technology, Queen's University Belfast, United Kingdom

  • Venue:
  • Neurocomputing
  • Year:
  • 2013

Abstract

Handling appearance variations is a very challenging problem in visual tracking. Existing methods usually address it with an appearance model that must satisfy two requirements: (1) it discriminates the tracked target from its background, and (2) it is robust to the target's appearance variations during tracking. Instead of integrating both requirements into a single appearance model, in this paper we propose a tracking method that handles them separately, based on sparse representation in a particle filter framework. Each target candidate defined by a particle is linearly represented by the target and background templates plus an additive representation error. Discriminating the target from its background is achieved by activating either the target templates or the background templates in the linear system in a competitive manner. The target's appearance variations are modeled directly as the representation error, and an online algorithm is used to learn the basis functions that sparsely span this error. The linear system is solved via ℓ1 minimization, and the candidate with the smallest reconstruction error using the target templates is selected as the tracking result. We evaluate the proposed approach on four sequences with heavy occlusions, large pose variations, drastic illumination changes and low foreground-background contrast. The proposed approach shows excellent performance in comparison with two recent state-of-the-art trackers.
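
To illustrate the ℓ1 representation and candidate-scoring step described in the abstract, the following is a minimal sketch rather than the authors' implementation: it solves the sparse coding problem with plain ISTA in NumPy, and the names `T`, `B`, `E`, `score_candidate`, and the penalty weight `lam` are assumptions made here for illustration only.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_represent(y, D, lam=0.01, n_iter=200):
    """Solve min_c 0.5*||y - D c||_2^2 + lam*||c||_1 with plain ISTA.

    y : candidate observation, shape (d,)
    D : dictionary whose columns are target templates, background
        templates and learned error basis vectors, shape (d, k)
    """
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - y)
        c = soft_threshold(c - step * grad, step * lam)
    return c

def score_candidate(y, T, B, E, lam=0.01):
    """Reconstruction error of a candidate using only the target templates.

    T, B, E : target templates, background templates, error basis (as columns).
    A lower score means the candidate is better explained by the target.
    """
    D = np.hstack([T, B, E])
    c = l1_represent(y, D, lam)
    c_target = c[:T.shape[1]]                # coefficients on the target templates
    return np.linalg.norm(y - T @ c_target) ** 2

# Toy usage: pick the particle with the smallest target reconstruction error.
rng = np.random.default_rng(0)
d, nT, nB, nE = 64, 5, 5, 8
T = rng.normal(size=(d, nT))
B = rng.normal(size=(d, nB))
E = rng.normal(size=(d, nE))
candidates = [T @ rng.random(nT) + 0.05 * rng.normal(size=d) for _ in range(10)]
best = min(range(len(candidates)), key=lambda i: score_candidate(candidates[i], T, B, E))
```

In this sketch the competition between target and background templates arises because the ℓ1 penalty is applied jointly over all coefficients, while the error basis `E` stands in for the online-learned functions that absorb appearance variations; the paper's actual solver, template update, and particle filter machinery are not reproduced here.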