Drift-correcting template update strategy for precision feature point tracking

  • Authors:
  • Xiaoming Peng, Mohammed Bennamoun, Qian Ma, Ying Lei, Qiheng Zhang, Wufan Chen

  • Affiliations:
  • College of Automation, University of Electronic Science and Technology of China, No. 4, Section 2, North Jianshe Road, Chengdu, Sichuan 610054, China
  • School of Computer Science and Software Engineering, University of Western Australia, M002, 35 Stirling Highway, Crawley, WA 6009, Australia
  • 5th Lab, Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 350 P.O. Box, Chengdu, Sichuan 610209, China

  • Venue:
  • Image and Vision Computing
  • Year:
  • 2010

Abstract

In this paper we present a drift-correcting template update strategy for precisely tracking a feature point in 2D image sequences. The proposed strategy complements one of the most recently published template update strategies by incorporating a robust non-rigid image registration step. Previous strategies use the first template to correct drift in the current template; however, drift still accumulates once the first template becomes different from the current one, particularly in long image sequences. In our strategy the first template is updated in a timely manner whenever it is found to differ substantially from the current template, and the updated first template is thereafter used to correct template drift in subsequent frames. Our method runs fast on a 3.0 GHz desktop PC, taking about 0.03 s on average to track a feature point in a frame (under a general affine transformation model, with a 61×61-pixel template) and less than 0.1 s to update the first template. The proposed template update strategy can be implemented either serially or in parallel. Quantitative evaluation results demonstrate the precision of the proposed method in tracking a distinctive feature point whose appearance is constantly changing. Qualitative evaluation results show that the proposed method sustains tracking of a feature point longer than two previous template update strategies. We also reveal the limitations of the proposed strategy by tracking feature points on a human face.
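The update logic the abstract describes can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' implementation: brute-force translational matching stands in for their affine Lucas-Kanade alignment, a plain template copy stands in for their robust non-rigid registration step, and the thresholds (`eps`, `sim_min`) and function names are assumptions introduced here for illustration.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equal-sized patches."""
    return float(np.sum((a - b) ** 2))

def ncc(a, b):
    """Normalized cross-correlation, used as a template-similarity score."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / d) if d > 0 else 0.0

def align(frame, template, init, radius=3):
    """Brute-force translational search around `init` (a stand-in for the
    affine image alignment used in the paper)."""
    h, w = template.shape
    best, best_p = np.inf, init
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = init[0] + dy, init[1] + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            e = ssd(frame[y:y + h, x:x + w], template)
            if e < best:
                best, best_p = e, (y, x)
    return best_p

def track(frames, init_pos, size, eps=1.5, sim_min=0.8):
    """Drift-correcting template update: track with the current template,
    re-align against the first template to cancel drift, and refresh the
    first template once it diverges from the current one."""
    h, w = size
    crop = lambda f, p: f[p[0]:p[0] + h, p[1]:p[1] + w].copy()
    T1 = crop(frames[0], init_pos)   # first (drift-correcting) template
    Tn = T1.copy()                   # current template
    pos = init_pos
    path = [pos]
    for frame in frames[1:]:
        p = align(frame, Tn, pos)        # step 1: track with current template
        p_star = align(frame, T1, p)     # step 2: re-align against first template
        if abs(p_star[0] - p[0]) + abs(p_star[1] - p[1]) <= eps:
            pos = p_star                 # drift-corrected position accepted
            Tn = crop(frame, pos)        # update the current template
        else:
            pos = p                      # correction too large: keep old template
        # Extension from the paper: refresh the first template when it has
        # diverged from the current one (the paper uses robust non-rigid
        # registration here; a direct copy stands in for that step).
        if ncc(T1, Tn) < sim_min:
            T1 = Tn.copy()
        path.append(pos)
    return path
```

On a synthetic sequence with a small patch translating a few pixels per frame, `track` follows the patch while the first-template check keeps the recovered positions from drifting as the current template is repeatedly replaced.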