Online updating appearance generative mixture model for meanshift tracking

  • Authors:
  • Jilin Tu; Hai Tao; Thomas Huang

  • Affiliations:
  • Elec. and Comp. Engr. Dept., Univ. of Illinois at Urbana-Champaign, Urbana, IL; Elec. Engr. Dept., Univ. of Calif. at Santa Cruz, Santa Cruz, CA; Elec. and Comp. Engr. Dept., Univ. of Illinois at Urbana-Champaign, Urbana, IL

  • Venue:
  • ACCV'06: Proceedings of the 7th Asian Conference on Computer Vision - Volume Part I
  • Year:
  • 2006


Abstract

This paper proposes an appearance generative mixture model based on key frames for mean-shift tracking. The mean-shift tracking algorithm tracks an object by maximizing the similarity between the histogram in the tracking window and a static histogram acquired at the beginning of tracking; tracking may therefore fail if the appearance of the object varies substantially. Assuming the key appearances of the object can be acquired before tracking, the manifold of the object's appearance can be approximated by a piecewise-linear combination of these key appearances in histogram space. The generative process is described by a Bayesian graphical model, and an online EM algorithm is derived to estimate the model parameters and to update the appearance histogram. The updated histogram improves the accuracy and reliability of mean-shift tracking, and the model parameters infer the state of the object with respect to the key appearances. We applied this approach to tracking human head motion and simultaneously inferring head pose in videos. Experiments verify that our online generative histogram-updating algorithm, constrained by the key appearance histograms, avoids the drifting problem often encountered in tracking with online updating; that the enhanced mean-shift algorithm tracks objects of varying appearance more robustly and accurately; and that our tracking algorithm can infer the state of the object (e.g., pose) simultaneously as a bonus.
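The core idea in the abstract (a target histogram modeled as a mixture of key-appearance histograms, with mixture weights refreshed online by EM) can be sketched as follows. This is a minimal illustration, not the paper's exact model: it assumes a simple mixture-of-multinomials view, where each bin's mass is assigned a per-key responsibility (E-step), the weights are re-estimated from the observed histogram (M-step), and a forgetting factor `lr` blends the new estimate with the old one for the online update. All function and parameter names here are hypothetical.

```python
import numpy as np

def bhattacharyya(p, q):
    # Similarity between two normalized histograms (used by mean-shift trackers).
    return float(np.sum(np.sqrt(p * q)))

def em_update_weights(weights, key_hists, obs_hist, lr=0.3, n_iter=5):
    """Online EM-style refresh of mixture weights over key appearances.

    weights:   (m,)  current mixture weights, summing to 1
    key_hists: (m,b) rows are normalized key-appearance histograms
    obs_hist:  (b,)  normalized histogram observed in the tracking window
    """
    w = weights.copy()
    for _ in range(n_iter):
        # E-step: per-bin responsibility r_ij of key i for bin j
        mix = key_hists * w[:, None]                 # (m, b)
        resp = mix / (mix.sum(axis=0) + 1e-12)
        # M-step: re-estimate weights from the observed bin mass
        w_new = (resp * obs_hist[None, :]).sum(axis=1)
        w_new /= w_new.sum()
        # Online blend: forgetting factor keeps the update gradual
        w = (1.0 - lr) * w + lr * w_new
    return w

def model_histogram(weights, key_hists):
    # Updated appearance model: convex combination of key histograms.
    return weights @ key_hists
```

Because the updated histogram is constrained to the convex hull of the key histograms, the model cannot drift to an arbitrary appearance, which mirrors the drift-avoidance property the paper claims; the weights themselves indicate which key appearance (e.g., which head pose) currently dominates.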