Dynamic Tracking of Facial Expressions Using Adaptive, Overlapping Subspaces

  • Authors:
  • Dimitris Metaxas; Atul Kanaujia; Zhiguo Li

  • Affiliations:
  • Department of Computer Science, Rutgers University (all authors)

  • Venue:
  • ICCS '07: Proceedings of the 7th International Conference on Computational Science, Part I
  • Year:
  • 2007

Abstract

We present a Dynamic Data Driven Application System (DDDAS) to track 2D shapes across large pose variations by learning a non-linear shape manifold as overlapping, piecewise-linear subspaces. The learned subspaces adapt to the subject by tracking the shapes independently with a Kanade-Lucas-Tomasi (KLT) point tracker. The novelty of our approach is that the tracked feature points are used to generate independent training examples for updating the learned shape manifold and the appearance model. We use landmark-based shape analysis to train a Gaussian mixture model over the aligned shapes and learn a Point Distribution Model (PDM) for each of the mixture components. The target 2D shape is searched for by first maximizing the mixture probability density of the local feature intensity profiles along the landmark normals, and then constraining the global shape using the most probable PDM cluster. The feature shapes are robustly tracked across multiple frames by dynamically switching between the PDMs. The tracked 2D facial features are used to deform the 3D face mask. The main advantage of 3D deformable face models is their reduced dimensionality: the smaller number of degrees of freedom makes the system more robust and enables capturing subtle facial expressions as changes in only a few parameters. We demonstrate results on tracking facial features and provide several empirical results to validate our approach. Our framework runs close to real time, at 25 frames per second.
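
To make the cluster-switching PDM constraint concrete, the sketch below shows one way to learn a Gaussian mixture over aligned landmark shapes with a linear PDM per component, and to constrain a candidate shape using the most probable cluster. This is a minimal illustration, not the authors' implementation: the scikit-learn GaussianMixture/PCA components, the variable names, and the +/-3-sigma clamp on the shape parameters are assumptions standing in for the paper's PDM machinery.

  import numpy as np
  from sklearn.mixture import GaussianMixture
  from sklearn.decomposition import PCA

  def learn_cluster_pdms(aligned_shapes, n_clusters=5, var_kept=0.98):
      # aligned_shapes: (n_samples, 2*n_landmarks) flattened, Procrustes-aligned shapes.
      # Fit a Gaussian mixture over the shapes and one linear PDM (PCA) per component.
      gmm = GaussianMixture(n_components=n_clusters, covariance_type="full", random_state=0)
      labels = gmm.fit_predict(aligned_shapes)
      pdms = []
      for k in range(n_clusters):
          pca = PCA(n_components=var_kept, svd_solver="full")  # keep ~98% of the variance
          pca.fit(aligned_shapes[labels == k])
          pdms.append(pca)
      return gmm, pdms

  def constrain_shape(shape, gmm, pdms, n_sigma=3.0):
      # Select the most probable mixture component for the candidate shape,
      # project it onto that cluster's PDM, and clamp each mode to +/- n_sigma
      # standard deviations before reconstructing a plausible global shape.
      k = int(np.argmax(gmm.predict_proba(shape[None, :])[0]))
      b = pdms[k].transform(shape[None, :])[0]
      limits = n_sigma * np.sqrt(pdms[k].explained_variance_)
      b = np.clip(b, -limits, limits)
      return pdms[k].inverse_transform(b[None, :])[0], k

In this reading, switching between PDMs across frames amounts to re-evaluating which mixture component is most probable for the shape produced by the local intensity-profile search, then applying that cluster's subspace constraint.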