Analysis, synthesis, and retargeting of facial expressions

  • Authors:
  • Christoph Bregler; Erika S. Chuang

  • Venue:
  • Ph.D. thesis
  • Year:
  • 2004

Abstract

Computer-animated characters have recently gained popularity in many applications, including web pages, computer games, movies, and various human-computer interface designs. To make these animated characters lively and convincing, they require sophisticated facial expressions and motions. Traditionally, these animations are produced entirely by skilled artists. Although the quality of manually produced animation remains the best, the process is slow and costly. Motion capture of the performances of actors and actresses is one technique that attempts to speed up this process. One problem with this technique is that the captured motion data cannot be edited easily. In recent years, statistical techniques have been used to address this problem by learning the mapping between audio speech and facial motion, so that new facial motion can be synthesized for novel audio data by reusing the motion capture data. However, since facial expressions are not modeled in these approaches, the resulting facial animation is realistic, yet expressionless.

This thesis takes an expressionless talking face and creates an expressive facial animation. The process consists of three parts: expression synthesis, blendshape retargeting, and head motion synthesis. Expression synthesis uses a factorization model to describe the interaction between facial expression and speech content underlying each particular facial appearance; a new facial expression can be applied to novel input video while retaining the same speech content. Blendshape retargeting maps facial expressions onto a 3D face model using the framework of blendshape interpolation. Three methods of sampling the keyshapes, or prototype shapes, from data are evaluated, and the generality of blendshape retargeting is demonstrated in three different domains. Head motion synthesis uses audio pitch contours to derive new head motion; the global and local statistics of the pitch and the coherency of the head motion are used to determine the optimal motion trajectory. Finally, expression synthesis, blendshape retargeting, and head motion synthesis are combined into a prototype system and demonstrated through an example.
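The abstract does not specify which factorization model expression synthesis uses. A common choice for separating an expression "style" from speech "content" is an asymmetric bilinear model in the spirit of Tenenbaum and Freeman, fit with a truncated SVD; the sketch below illustrates that idea only, and every data layout, dimension, and variable name in it is an assumption rather than the thesis's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_styles, n_contents, d, rank = 3, 40, 10, 4

# Synthetic stand-in for tracked face parameters: one d-dimensional
# vector per (expression style, speech content) pair, stacked
# style-major into a (n_styles * d, n_contents) observation matrix.
Y = rng.standard_normal((n_styles * d, n_contents))

# Asymmetric bilinear fit, Y ~= [A_1; ...; A_S] @ B: the truncated SVD
# yields per-style linear maps A_i and style-independent content vectors B.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
A = (U[:, :rank] * s[:rank]).reshape(n_styles, d, rank)  # style maps
B = Vt[:rank, :]                                         # content vectors

# "Expression synthesis": keep the content vector of frame j (what is
# being said) but render it through a different style map (how it looks).
j = 5
neutral_frame = A[0] @ B[:, j]
happy_frame = A[1] @ B[:, j]  # same speech content, different expression
```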
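Blendshape retargeting, as described in the abstract, represents a face as a weighted combination of keyshapes. One standard formulation (an assumption here, not necessarily the solver the thesis uses) recovers per-frame blend weights on the source keyshapes with non-negative least squares and then reuses those weights on the target character's keyshapes:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_verts, n_keys = 300, 8

# Hypothetical keyshapes, stored as vertex offsets from a neutral face:
# columns of K_src are the source performer's prototype expressions,
# columns of K_tgt are the corresponding shapes modeled on the target.
K_src = rng.standard_normal((n_verts, n_keys))
K_tgt = rng.standard_normal((n_verts, n_keys))

# A tracked source frame, synthesized here from known weights for demo.
true_w = np.array([0.6, 0.3, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0])
frame = K_src @ true_w

# Non-negative least squares keeps the recovered weights in a
# physically sensible blending range.
w, residual = nnls(K_src, frame)

# Retargeting = applying the recovered weights to the target keyshapes.
retargeted = K_tgt @ w
```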
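For head motion synthesis, the abstract describes matching the pitch statistics of the audio while keeping the head motion coherent. A minimal sketch of that idea is a Viterbi-style dynamic program over a library of recorded head-motion segments, trading a pitch-match cost against a transition (coherency) cost; the features, costs, and one-segment-per-frame choice below are all simplifying assumptions, not the thesis's formulation:

```python
import numpy as np

rng = np.random.default_rng(2)
n_frames, n_segs = 50, 12

# Hypothetical features: local pitch statistics of the new audio, and
# the pitch statistics under which each recorded head-motion segment
# was originally performed.
pitch_feat = rng.standard_normal((n_frames, 3))
seg_pitch = rng.standard_normal((n_segs, 3))
# Boundary pose/velocity of each segment, for the coherency cost.
seg_start = rng.standard_normal((n_segs, 6))
seg_end = rng.standard_normal((n_segs, 6))

# Match cost: how well segment k's pitch statistics fit frame f's audio.
match = np.linalg.norm(pitch_feat[:, None, :] - seg_pitch[None, :, :], axis=2)
# Coherency cost: boundary mismatch when segment b follows segment a.
trans = np.linalg.norm(seg_end[:, None, :] - seg_start[None, :, :], axis=2)

# Viterbi-style dynamic program for the minimum-cost segment sequence.
cost = match[0].copy()
back = np.zeros((n_frames, n_segs), dtype=int)
for f in range(1, n_frames):
    step = cost[:, None] + trans          # cost of ending at b via a
    back[f] = np.argmin(step, axis=0)     # best predecessor for each b
    cost = step[back[f], np.arange(n_segs)] + match[f]

# Backtrace the optimal head-motion trajectory.
path = [int(np.argmin(cost))]
for f in range(n_frames - 1, 0, -1):
    path.append(int(back[f, path[-1]]))
path.reverse()
```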