Automatic 3D facial expression analysis in videos

  • Authors:
  • Ya Chang (Computer Science Department, University of California, Santa Barbara, CA)
  • Marcelo Vieira (Instituto de Matemática Pura e Aplicada, Rio de Janeiro, RJ, Brazil)
  • Matthew Turk (Computer Science Department, University of California, Santa Barbara, CA)
  • Luiz Velho (Instituto de Matemática Pura e Aplicada, Rio de Janeiro, RJ, Brazil)

  • Venue:
  • AMFG'05 Proceedings of the Second international conference on Analysis and Modelling of Faces and Gestures
  • Year:
  • 2005


Abstract

We introduce a novel framework for automatic 3D facial expression analysis in videos; preliminary results demonstrate facial expression editing driven by facial expression recognition. We first build a 3D expression database to learn the expression space of a human face. The real-time 3D video data are captured by a camera/projector scanning system. From this database, we extract geometric deformation that is independent of pose and illumination changes. All possible facial deformations of an individual form a nonlinear manifold embedded in a high-dimensional space. Because the manifolds of different subjects vary significantly and are hard to align directly, we transfer the facial deformations in all training videos to one standard model. Lipschitz embedding then maps the normalized deformations of the standard model into a low-dimensional generalized manifold, on which we learn a probabilistic expression model. To edit a facial expression of a new subject in a 3D video, the system searches this generalized manifold for the optimal replacement with the ‘target’ expression, which is blended with the deformation in the previous frames to synthesize images of the new expression under the current head pose. Experimental results show that our method works effectively.
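The abstract's dimensionality-reduction step relies on Lipschitz embedding: each point's new coordinates are its distances to a handful of reference subsets of the data. The paper does not give implementation details, so the sketch below is only an illustration of the general technique on synthetic "deformation vectors" (the data shapes, the random reference subsets, and the function name are all assumptions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def lipschitz_embedding(X, reference_sets):
    """Lipschitz embedding: coordinate j of each point is its
    minimum Euclidean distance to reference set j."""
    emb = np.empty((X.shape[0], len(reference_sets)))
    for j, R in enumerate(reference_sets):
        # pairwise distances from every point in X to every point in R
        d = np.linalg.norm(X[:, None, :] - R[None, :, :], axis=2)
        emb[:, j] = d.min(axis=1)  # distance to nearest member of R
    return emb

# toy stand-in for normalized deformation vectors: 200 points in 50-D
X = rng.normal(size=(200, 50))

# k random reference subsets drawn from the data itself (a common choice)
k = 6
refs = [X[rng.choice(len(X), size=4, replace=False)] for _ in range(k)]

Y = lipschitz_embedding(X, refs)
print(Y.shape)  # (200, 6): 50-D deformations mapped to a 6-D space
```

Because each coordinate is a minimum of distance functions, the map is 1-Lipschitz in each output coordinate, which is what lets nearby deformations stay nearby in the low-dimensional generalized manifold.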