Simulating speech with a physics-based facial muscle model

  • Authors:
  • Eftychios Sifakis;Andrew Selle;Avram Robinson-Mosher;Ronald Fedkiw

  • Affiliations:
  • Stanford University and Intel Corporation;Stanford University and Industrial Light & Magic;Stanford University;Stanford University and Industrial Light & Magic

  • Venue:
  • Proceedings of the 2006 ACM SIGGRAPH/Eurographics symposium on Computer animation
  • Year:
  • 2006

Abstract

We present a physically based system for creating animations of novel words and phrases from text and audio input, based on the analysis of motion-captured speech examples. Leading image-based techniques exhibit photo-real quality, yet lack versatility, especially with regard to interactions with the environment. Data-driven approaches that use motion capture to deform a three-dimensional surface often lack any anatomical or physically based structure, limiting their accuracy and realism. In contrast, muscle-driven physics-based facial animation systems can trivially integrate external interacting objects and have the potential to produce very realistic animations, as long as the underlying model and simulation framework are faithful to the anatomy of the face and the physics of facial tissue deformation. We start with a high-resolution, anatomically accurate flesh and muscle model built for a specific subject. Then we translate a motion-captured training set of speech examples into muscle activation signals, and subsequently segment those into intervals corresponding to individual phonemes. Finally, these samples are used to synthesize novel words and phrases. The versatility of our approach is illustrated by combining this novel speech content with various facial expressions, as well as with interactions with external objects.
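The pipeline the abstract outlines (captured frames to muscle activations, activations segmented by phoneme, segments concatenated to form new words) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names (`estimate_activations`, `segment_by_phoneme`, `synthesize_word`) are hypothetical, and the activation estimate is a trivial clamp standing in for the paper's physics-based inverse problem.

```python
def estimate_activations(mocap_frames):
    # Stand-in for the inverse problem of the paper: mapping captured
    # marker motion to per-muscle activation levels. Here we simply
    # clamp each value to the valid activation range [0, 1].
    return [[min(max(v, 0.0), 1.0) for v in frame] for frame in mocap_frames]

def segment_by_phoneme(activations, labels):
    # Group activation frames by the phoneme label assigned to each frame.
    segments = {}
    for frame, label in zip(activations, labels):
        segments.setdefault(label, []).append(frame)
    return segments

def synthesize_word(phoneme_sequence, segments):
    # Assemble a novel word by concatenating the stored activation
    # segments for each phoneme in the requested sequence.
    out = []
    for ph in phoneme_sequence:
        out.extend(segments[ph])
    return out

# Toy usage: two muscles, four captured frames covering two phonemes.
frames = [[0.1, 0.9], [0.2, 0.8], [0.7, 0.3], [0.6, 0.2]]
labels = ["AA", "AA", "M", "M"]
acts = estimate_activations(frames)
segs = segment_by_phoneme(acts, labels)
word = synthesize_word(["M", "AA"], segs)  # activations for a novel word "MAA"
```

In the actual system, the synthesized activation signals would then drive the anatomical flesh-and-muscle simulation, which is what allows the result to be combined with expressions and collisions with external objects.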