Animating visible speech and facial expressions

  • Authors:
  • Jiyong Ma; Ronald Cole

  • Affiliations:
  • University of Colorado at Boulder, Center for Spoken Language Research, USA; University of Colorado at Boulder, Center for Spoken Language Research, USA

  • Venue:
  • The Visual Computer: International Journal of Computer Graphics
  • Year:
  • 2004

Abstract

We present four techniques for modeling and animating faces starting from a set of morph targets. The first technique obtains parameters that control individual facial components and uses machine learning to map one type of parameter to another. The second technique fuses visible speech and facial expressions in the lower part of the face. The third technique combines coarticulation rules with kernel smoothing. Finally, a new 3D tongue model with flexible and intuitive skeleton controls is presented. Results from eight animated character models demonstrate that these techniques are powerful and effective.
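The abstract's core pipeline, morph-target blending with kernel-smoothed weights for coarticulation, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the Gaussian kernel, the weight normalization, and all names (`neutral`, `targets`, `smoothed_weights`) are assumptions for the sketch.

```python
import numpy as np

def gaussian_kernel(t, center, width):
    """Illustrative dominance kernel centered on a viseme's time (assumption:
    the paper's kernel smoother need not be Gaussian)."""
    return np.exp(-0.5 * ((t - center) / width) ** 2)

def smoothed_weights(t, centers, widths):
    """Kernel-smoothed, normalized blend weights at time t, so neighboring
    visemes influence each other (a simple coarticulation effect)."""
    w = np.array([gaussian_kernel(t, c, s) for c, s in zip(centers, widths)])
    total = w.sum()
    return w / total if total > 0 else w

def blend(neutral, targets, weights):
    """Standard morph-target blending: neutral + sum_i w_i * (target_i - neutral)."""
    out = neutral.copy()
    for tgt, w in zip(targets, weights):
        out += w * (tgt - neutral)
    return out

# Toy example: a 4-vertex "face" with two morph targets (e.g. two visemes),
# peaking at times 0.2 s and 0.6 s.
neutral = np.zeros((4, 3))
targets = [np.ones((4, 3)), 2 * np.ones((4, 3))]
centers, widths = [0.2, 0.6], [0.15, 0.15]

# Midway between the two viseme centers, the weights are equal,
# so the mesh lies halfway between the two targets.
mesh = blend(neutral, targets, smoothed_weights(0.4, centers, widths))
```

At `t = 0.4`, both kernels contribute equally, so every vertex coordinate blends to the midpoint of the two targets; sampling `t` densely over an utterance yields a smooth animation trajectory rather than abrupt viseme switches.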