Visual tracking via dynamic tensor analysis with mean update

  • Authors:
  • Xiaoqin Zhang;Xingchu Shi;Weiming Hu;Xi Li;Steve Maybank

  • Affiliations:
  • College of Mathematics & Information Science, Wenzhou University, Zhejiang, China;National Laboratory of Pattern Recognition, CASIA, Beijing, China;National Laboratory of Pattern Recognition, CASIA, Beijing, China;National Laboratory of Pattern Recognition, CASIA, Beijing, China;Department of Computer Science and Information Systems, Birkbeck College, London, UK

  • Venue:
  • Neurocomputing
  • Year:
  • 2011

Abstract

The appearance model is a central issue in visual tracking. Most subspace-based appearance models focus on the temporal correlation between image observations of the object but ignore its spatial layout information. This paper proposes a robust appearance model for visual tracking that effectively combines the spatial and temporal eigen-spaces of the object through tensor reconstruction. To capture variations in object appearance, an incremental updating strategy is developed that updates both the eigen-space and the mean of the object. Experimental results demonstrate that, compared with state-of-the-art appearance models in the tracking literature, the proposed appearance model is more robust and effective.
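The abstract's incremental update of both the eigen-space and the mean can be illustrated with a minimal sketch. The paper's actual method operates on spatial-temporal tensors; the sketch below instead uses plain (vectorised) incremental PCA to show the mean-update idea, and all names (`update_eigenspace`, `mu`, `U`, `S`, `k`, the placeholder data) are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def update_eigenspace(mu, U, S, n, B, k=16):
    """Merge new observations B (d x m) into an existing eigen-space.

    mu : (d,)   current mean of the n previously seen samples
    U  : (d, r) current eigen-basis (orthonormal columns)
    S  : (r,)   current singular values
    n  : int    number of samples summarised by (mu, U, S)
    B  : (d, m) new image observations, one column per frame
    k  : int    number of eigenvectors to retain
    """
    m = B.shape[1]
    mu_B = B.mean(axis=1)
    mu_new = (n * mu + m * mu_B) / (n + m)

    # Centre the new data and append one extra column that accounts for the
    # shift of the mean, so the merged basis stays consistent with mu_new.
    B_centred = B - mu_B[:, None]
    mean_shift = np.sqrt(n * m / (n + m)) * (mu_B - mu)
    augmented = np.hstack([U * S, B_centred, mean_shift[:, None]])

    # Re-diagonalise the merged data and keep the k strongest directions.
    U_new, S_new, _ = np.linalg.svd(augmented, full_matrices=False)
    return mu_new, U_new[:, :k], S_new[:k], n + m

# Usage: build an initial eigen-space from the first frames, then fold in
# new observations online as the tracker runs.
d = 32 * 32                        # vectorised 32x32 appearance patches
frames0 = np.random.rand(d, 20)    # initial observations (placeholder data)
mu0 = frames0.mean(axis=1)
U0, S0, _ = np.linalg.svd(frames0 - mu0[:, None], full_matrices=False)
state = (mu0, U0[:, :16], S0[:16], 20)

new_frames = np.random.rand(d, 5)  # observations from the next few frames
state = update_eigenspace(*state, new_frames)
```

Carrying the mean along with the basis matters because object appearance drifts over time; re-centring on a stale mean would otherwise leak appearance changes into the eigen-vectors instead of the mean, which is the motivation for the mean update highlighted in the title.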