Facial expression analysis using nonlinear decomposable generative models

  • Authors:
  • Chan-Su Lee; Ahmed Elgammal

  • Affiliations:
  • Computer Science, Rutgers University, Piscataway, NJ (both authors)

  • Venue:
  • AMFG'05: Proceedings of the Second International Conference on Analysis and Modelling of Faces and Gestures
  • Year:
  • 2005

Abstract

We present a new framework for representing and analyzing dynamic facial motions using a decomposable generative model. In this paper, we consider facial expressions that lie on a one-dimensional closed manifold, i.e., that start from some configuration and return to that same configuration, while other sources of variability, such as different classes of expression and different people, all need to be parameterized. The learned model supports tasks such as facial expression recognition, person identification, and synthesis. We aim to learn a generative model that can generate different dynamic facial appearances for different people and for different expressions. Given a single image or a sequence of images, we can use the model to solve for the temporal embedding, the expression type, and the person identification parameters. As a result, we can directly infer the intensity of a facial expression, the expression type, and the person's identity from the visual input. The model can successfully be used to recognize expressions performed by people never seen during training. We show experimental results from applying the framework to simultaneous face and facial expression recognition. Sub-categories: 1.1 Novel algorithms, 1.6 Others: modeling facial expression.
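The abstract describes the model only at a high level. Purely as an illustration, the sketch below shows one way such a decomposable generative model could be organized, assuming a unit-circle embedding for the closed one-dimensional manifold, RBF features over the embedding, and a core tensor decomposed over separate person-style and expression-style vectors. All names, dimensions, and the alternating inference loop here are hypothetical assumptions made for illustration, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_on_circle(T):
    """Place T frames of a cyclic expression on the unit circle,
    the assumed 1D closed manifold of the expression dynamics."""
    t = np.linspace(0.0, 2.0 * np.pi, T, endpoint=False)
    return np.stack([np.cos(t), np.sin(t)], axis=1)           # (T, 2)

def rbf_features(X, centers, width=0.5):
    """RBF features psi(x) of embedded points against fixed centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))                   # (T, K)

def generate(C, psi, a, b):
    """Decomposable generation: y[t, d] = sum_{k,p,e} C[d,k,p,e] psi[t,k] a[p] b[e],
    i.e. appearance = core tensor x manifold coords x person style x expression style."""
    return np.einsum('dkpe,tk,p,e->td', C, psi, a, b)         # (T, D)

def infer(y, C, centers, a, b, iters=10, grid=200):
    """Given one image y, alternately solve for the embedding point on the
    circle (grid search) and the style vectors a, b (linear least squares)."""
    X = embed_on_circle(grid)                                  # candidate embeddings
    Psi = rbf_features(X, centers)                             # (grid, K)
    for _ in range(iters):
        # 1) embedding: pick the circle point minimizing reconstruction error
        Y = generate(C, Psi, a, b)                             # (grid, D)
        i = np.argmin(((Y - y) ** 2).sum(1))
        psi = Psi[i]
        # 2) person style a, with b fixed: y = M a
        M = np.einsum('dkpe,k,e->dp', C, psi, b)
        a = np.linalg.lstsq(M, y, rcond=None)[0]
        # 3) expression style b, with a fixed: y = N b
        N = np.einsum('dkpe,k,p->de', C, psi, a)
        b = np.linalg.lstsq(N, y, rcond=None)[0]
    return X[i], a, b

# Toy usage: synthesize one frame from known styles, then recover the embedding.
D, K, P, E, T = 30, 8, 3, 2, 40                                # dims (hypothetical)
centers = embed_on_circle(K)                                   # RBF centers on the circle
C = rng.normal(size=(D, K, P, E))                              # core tensor (random here; learned in practice)
a_true, b_true = rng.normal(size=P), rng.normal(size=E)
psi_seq = rbf_features(embed_on_circle(T), centers)
y = generate(C, psi_seq, a_true, b_true)[7]                    # observed frame at t = 7
x_hat, a_hat, b_hat = infer(y, C, centers, np.ones(P), np.ones(E))
print("recovered embedding point:", x_hat)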