Graph-based transductive learning for robust visual tracking

  • Authors:
  • Yufei Zha; Yuan Yang; Duyan Bi

  • Affiliations:
  • Signal and Information Processing Lab, Engineering College of Air Force Engineering University, Xi'an, China (all authors)

  • Venue:
  • Pattern Recognition
  • Year:
  • 2010

Abstract

In the object tracking problem, most methods assume brightness constancy or subspace constancy, assumptions that are often violated in practice. In this paper, object tracking is cast as a transductive learning problem, and a robust tracking method is proposed that copes with both intrinsic and extrinsic appearance variations. The tracked object should not only fit the object model but also fall into the same cluster as the previously tracked objects, which serve as the labeled data. A cost function is first constructed by constraining both global and local information, and its minimizer can be obtained with simple linear algebra involving the graph Laplacian. Moreover, a novel graph is constructed over the positive samples and candidate patches, which simultaneously learns the object's global appearance model and the local intrinsic geometric structure of all the patches. Furthermore, a heuristic positive-sample selection scheme is adopted to make the method more effective. The proposed method is tested on videos that undergo large pose, expression, and illumination changes as well as partial occlusion, and is compared with state-of-the-art algorithms. Experimental results and comparative studies demonstrate the effectiveness of the proposed method.
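
The graph-Laplacian formulation described in the abstract can be illustrated with a standard Laplacian-regularized least-squares (transductive) sketch. This is a minimal illustration, not the paper's exact cost function: the Gaussian affinity, the function names, and the parameters `sigma` and `lam` are assumptions made for the example.

```python
import numpy as np

def build_graph_laplacian(features, sigma=1.0):
    """Build a dense Gaussian-weighted affinity graph and its Laplacian.

    features: (n, d) array of feature vectors for the labeled positive
              samples plus the unlabeled candidate patches.
    """
    # Pairwise squared Euclidean distances
    sq = np.sum(features ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    W = np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    return D - W  # unnormalized graph Laplacian

def transductive_scores(features, labels, labeled_mask, lam=1.0):
    """Laplacian-regularized least squares over labeled + unlabeled nodes.

    Minimizes  sum_i c_i (f_i - y_i)^2 + lam * f^T L f,
    where c_i = 1 for labeled samples and 0 for candidate patches.
    The closed-form minimizer is  f = (C + lam * L)^{-1} C y.
    """
    L = build_graph_laplacian(features)
    C = np.diag(labeled_mask.astype(float))
    y = labels.astype(float)
    f = np.linalg.solve(C + lam * L, C @ y)
    return f  # higher score => candidate more likely to be the object
```

In a tracking loop of this kind, the candidate patch with the highest score f would be taken as the tracking result for the current frame, and a positive-sample selection step (heuristic in the paper) would decide which tracked patches are added to the labeled set for subsequent frames.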