Incremental pairwise discriminant analysis based visual tracking

  • Authors:
  • Jing Wen;Xinbo Gao;Xuelong Li;Dacheng Tao;Jie Li

  • Affiliations:
  • School of Electronic Engineering, Xidian University, No.2, South Taibai Road, Xi'an 710071, Shaanxi, P. R. China
  • School of Electronic Engineering, Xidian University, No.2, South Taibai Road, Xi'an 710071, Shaanxi, P. R. China
  • Center for OPTical IMagery Analysis and Learning (OPTIMAL), State Key Laboratory of Transient Optics and Photonics, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, ...
  • School of Computer Engineering, Nanyang Technological University, Singapore
  • School of Electronic Engineering, Xidian University, No.2, South Taibai Road, Xi'an 710071, Shaanxi, P. R. China

  • Venue:
  • Neurocomputing
  • Year:
  • 2010

Quantified Score

Hi-index 0.01

Abstract

The distinction between the object appearance and the background provides useful cues for visual tracking, and discriminant analysis is widely applied to exploit it. However, because background observations are highly diverse, there are rarely enough negative samples from the background, which often drives discriminant methods to tracking failure. A natural solution is to construct object-background pairs constrained by the spatial structure, which not only reduces the number of negative samples required but also makes full use of the background information surrounding the object. This idea is challenged, however, by the variation of both the object appearance and the spatially constrained background observation, especially when the background shifts as the object moves. Therefore, an incremental pairwise discriminant subspace is constructed in this paper to model the variation of the object-background distinction. To preserve the subspace's ability to describe this distinction correctly, we enforce two novel constraints during the optimal adaptation: (1) a pairwise data discriminant constraint and (2) subspace smoothness. Experimental results demonstrate that the proposed approach alleviates adaptation drift and achieves better tracking results on a large variety of nonstationary scenes.
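The core idea of the abstract — learning a discriminant subspace from object-background pairs and adapting it smoothly over time — can be illustrated with a minimal sketch. This is not the paper's actual formulation; it is a simplified stand-in in which the leading principal directions of the pairwise object-minus-background differences serve as the discriminant basis, and a blending factor `alpha` crudely approximates the subspace-smoothness constraint. All function names and parameters here are hypothetical.

```python
import numpy as np

def pairwise_discriminant_subspace(obj_patches, bg_patches, k):
    """Compute a k-dimensional discriminant basis from object-background pairs.

    obj_patches, bg_patches: (n, d) arrays; row i of bg_patches is the
    spatially adjacent background patch paired with object patch i.
    A simplified stand-in for the paper's pairwise discriminant analysis:
    the leading right singular vectors of the centered pairwise
    differences are taken as the basis.
    """
    diffs = obj_patches - bg_patches          # pairwise differences
    diffs = diffs - diffs.mean(axis=0)        # center
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[:k].T                           # (d, k), orthonormal columns

def incremental_update(basis, new_obj, new_bg, alpha=0.9):
    """Adapt the basis toward new pairwise evidence, smoothly.

    alpha blends the old subspace with the one estimated from the new
    pairs (a crude proxy for the paper's subspace-smoothness
    constraint); the blend is re-orthonormalized via QR.
    """
    k = basis.shape[1]
    new_basis = pairwise_discriminant_subspace(new_obj, new_bg, k)
    # Align signs column-wise so the blend does not cancel (SVD bases
    # are only defined up to sign).
    signs = np.sign(np.sum(basis * new_basis, axis=0))
    signs[signs == 0] = 1.0
    blended = alpha * basis + (1 - alpha) * new_basis * signs
    q, _ = np.linalg.qr(blended)
    return q[:, :k]
```

In a tracker loop, one would project candidate patches onto the basis and score them against the object model, calling `incremental_update` after each confident detection so the subspace follows both appearance and background changes.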