Collaborative object tracking model with local sparse representation

  • Authors:
  • Chengjun Xie; Jieqing Tan; Peng Chen; Jie Zhang; Lei He

  • Affiliations:
  • Chengjun Xie: School of Computer & Information, Hefei University of Technology, Hefei 230009, China; Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei 230031, China
  • Jieqing Tan: School of Computer & Information, Hefei University of Technology, Hefei 230009, China
  • Peng Chen: Institute of Health Sciences, Anhui University, Hefei, Anhui 230601, China
  • Jie Zhang: Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei 230031, China
  • Lei He: School of Computer & Information, Hefei University of Technology, Hefei 230009, China

  • Venue:
  • Journal of Visual Communication and Image Representation
  • Year:
  • 2014

Abstract

Many visual tracking methods are based on sparse representation models, but most are either purely generative or purely discriminative, which makes tracking difficult when the object undergoes large pose changes, illumination variation, or partial occlusion. To address this issue, we propose a collaborative object tracking model with local sparse representation. The key idea is to combine a local sparse representation-based discriminative model (SRDM) with a local sparse representation-based generative model (SRGM). In the SRDM module, the appearance of the target is modeled by local sparse codes, which serve as training data for a linear classifier that discriminates the target from the background. In the SRGM module, the appearance of the target is represented by a sparse coding histogram, and a sparse coding-based similarity measure computes the distance between the histograms of a target candidate and the target template. Finally, a collaborative similarity measure fuses the two models, and the resulting likelihood of each target candidate is fed into a particle filter framework to estimate the target state sequentially over time. Experiments on publicly available benchmark video sequences show that the proposed tracker is robust and effective.
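
As a rough illustration of the pipeline the abstract describes, the sketch below sparse-codes local patches of one candidate, scores the codes with a linear classifier (the SRDM side), compares a sparse-coding histogram against the template's histogram (the SRGM side), and fuses the two scores into a single likelihood that a particle filter could weight. The ISTA solver, the concatenation of local codes, the Gaussian histogram similarity, the product fusion rule, and all names (`sparse_code_ista`, `candidate_likelihood`, `lam`, `sigma`) are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of a collaborative likelihood for one target candidate;
# the dictionary D, classifier (w, b), and fusion rule are assumptions made
# for illustration, not the paper's published method.
import numpy as np

def sparse_code_ista(y, D, lam=0.1, n_iter=100):
    """Solve min_c 0.5*||y - D c||^2 + lam*||c||_1 with plain ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ c - y)              # gradient of the smooth term
        z = c - g / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return c

def candidate_likelihood(patches, D, w, b, template_hist, sigma=0.5):
    """Fuse a discriminative and a generative score for one candidate."""
    codes = np.array([sparse_code_ista(p, D) for p in patches])
    # SRDM-style score: linear classifier on the concatenated local codes,
    # squashed to (0, 1) with a logistic function.
    h_d = 1.0 / (1.0 + np.exp(-(w @ codes.ravel() + b)))
    # SRGM-style score: compare sparse-coding histograms of candidate and
    # template with a Gaussian similarity on their distance.
    hist = np.abs(codes).sum(axis=0)
    hist /= hist.sum() + 1e-12
    dist = np.linalg.norm(hist - template_hist)
    h_g = np.exp(-dist**2 / (2.0 * sigma**2))
    # Collaborative measure: a simple product, one common fusion choice in
    # collaborative trackers; the paper defines its own measure.
    return h_d * h_g
```

In the paper's setting, `D` would be a dictionary built from local target patches and `template_hist` from the template's sparse codes; inside a particle filter, `candidate_likelihood` would be evaluated once per particle and the results normalized into importance weights.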