Robust Visual Tracking Based on an Effective Appearance Model

  • Authors:
  • Xi Li; Weiming Hu; Zhongfei Zhang; Xiaoqin Zhang

  • Affiliations:
  • National Laboratory of Pattern Recognition, CASIA, Beijing, China (Xi Li, Weiming Hu, Xiaoqin Zhang); State University of New York, Binghamton, NY 13902, USA (Zhongfei Zhang)

  • Venue:
  • ECCV '08 Proceedings of the 10th European Conference on Computer Vision: Part IV
  • Year:
  • 2008


Abstract

Most existing appearance models for visual tracking construct a pixel-based representation of object appearance and are therefore incapable of fully capturing both the global and local spatial layout information of object appearance. To address this problem, we propose a novel spatial Log-Euclidean appearance model (referred to as SLAM) under the recently introduced Log-Euclidean Riemannian metric [23]. SLAM captures both the global and local spatial layout information of object appearance by constructing a block-based Log-Euclidean eigenspace representation. Specifically, learning the proposed SLAM consists of five steps: appearance block division, online Log-Euclidean eigenspace learning, local spatial weighting, global spatial weighting, and likelihood evaluation. Furthermore, a novel online Log-Euclidean Riemannian subspace learning algorithm (IRSL) [14] is applied to incrementally update the proposed SLAM. Tracking is then carried out within the Bayesian state inference framework, in which a particle filter propagates the sample distribution over time. Theoretical analysis and experimental evaluations demonstrate the promise and effectiveness of the proposed SLAM.
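The Log-Euclidean framework the abstract builds on maps symmetric positive-definite (SPD) matrices, such as region covariance descriptors of appearance blocks, into a vector space via the matrix logarithm, so that ordinary Euclidean operations (averaging, eigenspace learning, distances) become valid. A minimal sketch of these standard operations is shown below; it illustrates the metric itself, not the authors' full SLAM pipeline, and the function names are illustrative:

```python
import numpy as np

def spd_log(M):
    """Matrix logarithm of an SPD matrix via eigendecomposition.

    Since M is symmetric positive-definite, M = V diag(w) V^T with w > 0,
    so log(M) = V diag(log w) V^T, which is again symmetric.
    """
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    """Log-Euclidean distance: Frobenius norm of log(A) - log(B).

    After the log map, SPD matrices live in a flat (Euclidean) space,
    which is what makes linear subspace learning on them well-posed.
    """
    return np.linalg.norm(spd_log(A) - spd_log(B), ord="fro")

if __name__ == "__main__":
    # Two toy 2x2 SPD "descriptors"; distance from the identity to 2*I
    # is ||log(2) * I||_F = log(2) * sqrt(2).
    A = 2.0 * np.eye(2)
    print(log_euclidean_distance(A, np.eye(2)))
```

Under this metric, a set of block descriptors can be log-mapped, vectorized, and fed to an incremental eigenspace (subspace) learner, which is the role IRSL plays in updating SLAM online.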