Photo-realistic facial expression synthesis

  • Authors:
  • John Ghent; John McDonald

  • Affiliations:
  • Department of Computer Science, National University of Ireland Maynooth, Ireland (both authors)

  • Venue:
  • Image and Vision Computing
  • Year:
  • 2005

Abstract

This paper details a procedure for generating a function that maps an image of a neutral face to one depicting a desired expression, independent of age, sex, or skin colour. Facial expression synthesis is a growing and relatively new domain within computer vision. A fundamental problem in previous approaches to accurate expression synthesis is the lack of a consistent method for measuring expression, which inhibits the generation of a universal mapping function. This paper advances the domain by introducing the Facial Expression Shape Model (FESM) and the Facial Expression Texture Model (FETM). These are statistical models of facial expression grounded in an anatomical analysis of expression, the Facial Action Coding System (FACS). The FESM and the FETM allow for the generation of a universal mapping function. These models provide a robust means of upholding the rules of the FACS and are flexible enough to describe subjects that are not present during the training phase. We use these models in conjunction with several Artificial Neural Networks (ANNs) to generate photo-realistic images of facial expressions.
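The abstract does not give implementation details, but the general recipe it describes, a PCA-based statistical model of facial shape whose parameters are mapped from a neutral face to a target expression by a neural network, can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the function names (`build_shape_model`, `project`, `reconstruct`), the simple NumPy MLP, and all dimensions are hypothetical and are not taken from the paper; the FETM would be built analogously over shape-normalised texture vectors.

```python
import numpy as np

# --- PCA shape model over aligned landmark vectors (illustrative stand-in
#     for a FESM-style model; not the authors' implementation).

def build_shape_model(shapes, var_retained=0.98):
    """shapes: (n_samples, 2*n_landmarks) array of aligned landmarks."""
    mean = shapes.mean(axis=0)
    centred = shapes - mean
    # PCA via SVD of the centred data matrix.
    _, s, vt = np.linalg.svd(centred, full_matrices=False)
    var = s**2 / (len(shapes) - 1)
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_retained)) + 1
    return mean, vt[:k].T, var[:k]        # mean shape, modes P, mode variances

def project(shape, mean, modes):
    """Shape -> model parameters b = P^T (x - mean)."""
    return modes.T @ (shape - mean)

def reconstruct(b, mean, modes):
    """Model parameters -> shape x = mean + P b."""
    return mean + modes @ b

# --- Small feed-forward network mapping neutral-face model parameters to
#     the model parameters of a target expression (one plausible reading of
#     "several ANNs": one mapping per expression or action-unit combination).

class MLP:
    def __init__(self, n_in, n_hidden, n_out, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.W1 + self.b1)   # hidden activations
        return self.h @ self.W2 + self.b2         # predicted expression params

    def train_step(self, x, y):
        """One gradient step on squared error for a single (x, y) pair."""
        pred = self.forward(x)
        err = pred - y                            # d(loss)/d(pred)
        dW2 = np.outer(self.h, err)
        dh = (err @ self.W2.T) * (1.0 - self.h**2)
        dW1 = np.outer(x, dh)
        self.W2 -= self.lr * dW2; self.b2 -= self.lr * err
        self.W1 -= self.lr * dW1; self.b1 -= self.lr * dh
        return float((err**2).mean())
```

Under these assumptions, synthesis proceeds by projecting a neutral face into the shape (and texture) model, running the resulting parameter vector through the trained network, and reconstructing the predicted expression parameters back into an image; because the PCA models span variation across training subjects, the same mapping can be applied to faces unseen during training.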