Manifold based analysis of facial expression

  • Authors:
  • Ya Chang, Changbo Hu, Rogerio Feris, Matthew Turk

  • Affiliations:
  • Computer Science Department, University of California, Santa Barbara, CA 93106, USA (Chang, Feris); Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA (Hu, Turk)

  • Venue:
  • Image and Vision Computing
  • Year:
  • 2006


Abstract

We propose a novel approach for modeling, tracking, and recognizing facial expressions on a low-dimensional expression manifold. A modified Lipschitz embedding is developed to embed aligned facial features in a low-dimensional space while preserving the main structure of the manifold. In the embedded space, a complete expression sequence becomes a path on the expression manifold, emanating from a center that corresponds to the neutral expression. In an offline training stage, facial contour features are first clustered in this space using a mixture model. For each cluster in the low-dimensional space, a specific active shape model (ASM) is learned in order to avoid incorrect matching due to non-linear image variations. A probabilistic model of transitions between the clusters and paths in the embedded space is then learned. Given a new expression sequence, we use ICondensation to track facial features while simultaneously recognizing facial expressions within the same probabilistic framework. Experimental results demonstrate that our probabilistic facial expression model on the manifold significantly improves facial deformation tracking and expression recognition. We also synthesize image sequences of changing expressions through the manifold model.
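The core of a classical Lipschitz embedding (the starting point the paper modifies) is to map each feature vector to its distances to a handful of reference subsets of the data. The sketch below is a minimal illustration of that classical construction only, not the authors' modified version; the array shapes, the random reference-set choice, and the function name `lipschitz_embed` are assumptions for illustration.

```python
import numpy as np

def lipschitz_embed(X, ref_sets):
    """Classical Lipschitz embedding: each point maps to its distance
    to the nearest member of each reference set (one output dim per set)."""
    emb = np.empty((len(X), len(ref_sets)))
    for j, S in enumerate(ref_sets):
        # pairwise distances from every point in X to every point in S
        d = np.linalg.norm(X[:, None, :] - S[None, :, :], axis=2)
        emb[:, j] = d.min(axis=1)  # distance to the closest reference point
    return emb

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))   # e.g. 100 aligned facial-feature vectors (hypothetical data)
# three reference subsets drawn from the data itself
refs = [X[rng.choice(100, size=5, replace=False)] for _ in range(3)]
Y = lipschitz_embed(X, refs)     # 100 points embedded in a 3-D space
```

In this low-dimensional output, nearby feature vectors stay nearby, which is what lets an expression sequence trace a continuous path on the manifold.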