Spatio-temporal graphical-model-based multiple facial feature tracking

  • Authors:
  • Congyong Su; Li Huang

  • Affiliations:
  • College of Computer Science, Zhejiang University, Hangzhou, China (both authors)

  • Venue:
  • EURASIP Journal on Applied Signal Processing
  • Year:
  • 2005

Abstract

It is challenging to track multiple facial features simultaneously when a face exhibits rich expressions. We propose a two-step solution. In the first step, several independent Condensation-style particle filters track each facial feature in the temporal domain. Particle filters are very effective for visual tracking problems; however, multiple independent trackers ignore the spatial constraints and the natural relationships among facial features. In the second step, we use belief propagation, a form of Bayesian inference, to infer each facial feature's contour in the spatial domain, where the relationships among facial feature contours are learned beforehand from a large facial expression database. The experimental results show that our algorithm can robustly track multiple facial features simultaneously, even in the presence of large interframe motions and expression changes.
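
The two-step scheme described in the abstract can be illustrated with a short Python sketch. The snippet below is an approximation made for illustration only: it tracks a single 2D centre per feature rather than a full contour, uses a random-walk dynamics model, a Gaussian toy observation model, a hypothetical three-feature set, and fixed pairwise offsets standing in for the contour relationships the authors learn from a facial expression database. The spatial step is a simplified iterative message-passing pass, not the paper's belief propagation formulation.

```python
# Illustrative sketch of the two-step idea:
#  (1) one Condensation-style particle filter per facial feature (temporal step),
#  (2) an iterative message-passing pass coupling the per-feature estimates
#      through pairwise spatial relations (spatial step).
# Dynamics, observation model, feature set, and offsets are assumptions for this
# sketch, not the paper's learned models.

import numpy as np

rng = np.random.default_rng(0)

FEATURES = ["left_eye", "right_eye", "mouth"]   # hypothetical feature set
N_PARTICLES = 200
STATE_DIM = 2                                   # e.g. (x, y) centre of a feature contour


def predict(particles, motion_std=3.0):
    """Condensation prediction: diffuse particles with a random-walk motion model."""
    return particles + rng.normal(0.0, motion_std, particles.shape)


def likelihood(particles, observation, obs_std=5.0):
    """Toy observation model: Gaussian weight around a measured feature position."""
    d2 = np.sum((particles - observation) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / obs_std ** 2)
    s = w.sum()
    return w / s if s > 0 else np.full(len(w), 1.0 / len(w))


def resample(particles, weights):
    """Resampling step: draw particles in proportion to their weights."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]


def spatial_refine(estimates, offsets, iters=5, lam=0.5):
    """Simplified stand-in for belief propagation on a fully connected feature graph:
    each feature estimate is pulled toward the positions its neighbours predict for it
    via pairwise offsets (fixed constants here; in the paper these relations are
    learned from a facial expression database)."""
    est = dict(estimates)
    for _ in range(iters):
        est = {
            f: (1 - lam) * est[f]
               + lam * np.mean([est[g] + offsets[(g, f)] for g in est if g != f], axis=0)
            for f in est
        }
    return est


# Hypothetical pairwise offsets standing in for the learned contour relationships.
offsets = {
    ("left_eye", "right_eye"): np.array([60.0, 0.0]),
    ("right_eye", "left_eye"): np.array([-60.0, 0.0]),
    ("left_eye", "mouth"): np.array([30.0, 80.0]),
    ("mouth", "left_eye"): np.array([-30.0, -80.0]),
    ("right_eye", "mouth"): np.array([-30.0, 80.0]),
    ("mouth", "right_eye"): np.array([30.0, -80.0]),
}

starts = [np.array([100.0, 100.0]), np.array([160.0, 100.0]), np.array([130.0, 180.0])]
particles = {f: s + rng.normal(0.0, 10.0, (N_PARTICLES, STATE_DIM))
             for f, s in zip(FEATURES, starts)}

for frame in range(10):                          # synthetic "video" loop
    estimates = {}
    for f in FEATURES:
        # Fake measurement; a real tracker would score image evidence along the contour.
        obs = particles[f].mean(axis=0) + rng.normal(0.0, 2.0, STATE_DIM)
        particles[f] = predict(particles[f])     # temporal step, independent per feature
        w = likelihood(particles[f], obs)
        particles[f] = resample(particles[f], w)
        estimates[f] = particles[f].mean(axis=0)
    estimates = spatial_refine(estimates, offsets)  # spatial step couples the features
    print(frame, {f: np.round(p, 1) for f, p in estimates.items()})
```

The structure mirrors the abstract's reasoning: the per-feature filters alone can drift apart under large motions, while the spatial pass pulls each estimate back toward configurations consistent with the other features.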