Dynamic, expressive speech animation from a single mesh

  • Authors:
  • Kevin Wampler;Daichi Sasaki;Li Zhang;Zoran Popović

  • Affiliations:
  • University of Washington;Sony;Columbia University;University of Washington

  • Venue:
  • SCA '07: Proceedings of the 2007 ACM SIGGRAPH/Eurographics Symposium on Computer Animation
  • Year:
  • 2007

Abstract

In this work we present a method for human face animation that generates animations for a novel person given just a single mesh of their face. These animations can be of arbitrary text and may include emotional expressions. We build a multilinear model from data which encapsulates the variation in dynamic face motions over changes in identity, expression, and text. We then describe a synthesis method, consisting of a phoneme planning stage and a blending stage, which uses this model as a base and attempts to preserve both face shape and dynamics given a novel text and an emotion at each point in time.
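
The abstract describes a multilinear (tensor) face model whose modes separate identity, expression, and phoneme content. Below is a minimal sketch, not the authors' implementation, of how such a model could be evaluated: a learned core tensor is contracted with one weight vector per factor to produce mesh vertex positions. The tensor shapes, factor sizes, and function names are illustrative assumptions.

```python
# Sketch of evaluating a multilinear face model (assumed Tucker-style core tensor).
import numpy as np

def synthesize_face(core, w_id, w_expr, w_phon):
    """Contract the core tensor with weights for each factor.

    core   : (num_vertex_coords, num_identities, num_expressions, num_phonemes)
    w_id   : (num_identities,)   identity weights (e.g. fit to the single input mesh)
    w_expr : (num_expressions,)  emotional-expression weights
    w_phon : (num_phonemes,)     phoneme/viseme weights from a planning stage
    returns: (num_vertex_coords,) flattened vertex positions of the output mesh
    """
    return np.einsum('viep,i,e,p->v', core, w_id, w_expr, w_phon)

# Toy example with random data standing in for a learned model.
rng = np.random.default_rng(0)
core = rng.standard_normal((3 * 1000, 5, 4, 8))  # 1000 vertices, 5 identities, 4 emotions, 8 visemes
face = synthesize_face(
    core,
    w_id=np.array([1.0, 0.0, 0.0, 0.0, 0.0]),  # novel person's fitted identity weights
    w_expr=np.array([0.7, 0.3, 0.0, 0.0]),     # blend of two emotions
    w_phon=np.eye(8)[2],                       # current viseme (one-hot)
)
print(face.shape)  # (3000,)
```

In a full pipeline the identity weights would be fit once from the novel face mesh, while the expression and phoneme weights would vary over time according to the planned phoneme sequence and the desired emotion at each frame.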