Facial type, expression, and viseme generation

  • Authors:
  • James Skorupski, Jerry Yee, Josh McCoy, James Davis

  • Affiliations:
  • University of California, Santa Cruz (all authors)

  • Venue:
  • ACM SIGGRAPH 2007 posters
  • Year:
  • 2007

Abstract

The process of generating facial models and various poses of those models is a necessary part of most present-day movies, and is usually required for any interactive game that features humans as primary characters. The generation of this face data can be approached in ways ranging from pure computation to pure data acquisition. Computational models are flexible but can lack realism and intuitive or simple controls, while data-driven models produce realistic faces but require the often slow and cumbersome capture of new scan data for every desired set of face attributes. Our method is a hybrid approach that combines a relatively small set of real-world facial data with a computational algorithm that automatically learns the underlying variations in this geometric information. Given a sparse data set that spans variation in viseme, face type, and expression, we are able to generate new faces that exhibit combinations of these attributes and were never part of the original data set. We rely on user-assisted categorization of our sparse data set to associate each piece of face data with a small set of attribute contributions, and then use this categorization as a guide for binding abstract variation to concrete parameters. This process takes the complex, subtle, and often subjective qualities associated with visemes, expressions, and face types, and correlates them with known geometric features, in order to facilitate the creation of entirely new face poses.
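
The abstract does not specify the learning algorithm, so the following is only a minimal sketch of the general idea: bind user-assigned attribute contributions (face type, expression, viseme) to scanned geometry with a simple linear model, then synthesize new faces from attribute combinations absent from the scan set. All names, array shapes, and the random stand-in geometry are hypothetical, not the authors' actual data or method.

```python
import numpy as np

# Hypothetical sparse data set: each scan is a flattened array of vertex
# positions, paired with user-assigned attribute contributions
# (face type, expression, viseme). Random geometry stands in for real scans.
num_scans, num_verts = 30, 5000
scans = np.random.rand(num_scans, num_verts * 3)   # stand-in scan geometry
attributes = np.random.rand(num_scans, 3)          # columns: face type, expression, viseme

# Learn a linear map from attribute space to geometry via least squares.
# A constant column is appended so the model includes a mean-face offset.
A = np.hstack([attributes, np.ones((num_scans, 1))])
basis, *_ = np.linalg.lstsq(A, scans, rcond=None)  # shape: (4, num_verts * 3)

def synthesize(face_type, expression, viseme):
    """Generate a face pose from an attribute combination that need not
    appear anywhere in the original scan set."""
    query = np.array([face_type, expression, viseme, 1.0])
    return (query @ basis).reshape(num_verts, 3)

# Example: a pose that was never scanned -- a mostly open-mouth viseme
# on a lightly smiling face of a particular type.
new_face = synthesize(face_type=0.7, expression=0.2, viseme=0.9)
print(new_face.shape)  # (5000, 3)
```

A published system would likely use something richer than a single global linear fit, such as per-attribute basis shapes or a multilinear decomposition; the sketch only illustrates the core step of correlating categorized attribute weights with concrete geometric parameters.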