An integrated framework for face modeling, facial motion analysis and synthesis

  • Authors:
  • Pengyu Hong; Zhen Wen; Thomas Huang

  • Affiliations:
  • University of Illinois at Urbana-Champaign, Urbana, IL, USA (all three authors)

  • Venue:
  • MULTIMEDIA '01: Proceedings of the Ninth ACM International Conference on Multimedia
  • Year:
  • 2001


Abstract

This paper presents an integrated framework for face modeling, facial motion analysis, and synthesis. The framework systematically addresses three closely related research issues: (1) selecting a quantitative visual representation for face modeling and face animation; (2) automatic facial motion analysis based on that same visual representation; and (3) modeling speech-to-facial coarticulation. The framework provides a guideline for methodically building a face modeling and animation system, and its systematic nature is reflected in the links among its components, whose details are presented. Based on this framework, we improved a face modeling and animation system called the iFACE system [4]. The final system provides functionality for customizing a generic face model for an individual, text-driven face animation, off-line speech-driven face animation, and real-time speech-driven face animation.
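The abstract's central point is that the three components share one quantitative visual representation: motion analysis outputs the same parameters the animation model consumes, and the coarticulation model produces trajectories in that same parameter space. The sketch below illustrates that architectural idea only; every name, parameter, and value in it is hypothetical and does not reflect the iFACE system's actual API or the paper's learned coarticulation model.

```python
# Illustrative sketch of the framework's architecture; all names and
# numbers are hypothetical, not taken from the iFACE system.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class FaceParams:
    """Shared quantitative visual representation: a small vector of
    deformation weights used by both analysis and synthesis."""
    weights: List[float]


def analyze_motion(tracked_features: List[float],
                   basis_dim: int = 3) -> FaceParams:
    """Stand-in for facial motion analysis: project tracked feature
    displacements into the same parameter space the animation uses."""
    w = tracked_features[:basis_dim]
    return FaceParams(weights=w + [0.0] * (basis_dim - len(w)))


# Toy viseme table for speech-driven synthesis (values invented).
VISEMES: Dict[str, FaceParams] = {
    "AA": FaceParams([1.0, 0.2, 0.0]),   # open-jaw shape
    "M":  FaceParams([0.0, 0.0, 1.0]),   # closed-lips shape
}


def synthesize(phonemes: List[str], blend: float = 0.5) -> List[FaceParams]:
    """Naive coarticulation: blend each viseme with the previous frame
    so mouth shapes transition smoothly. A placeholder for the trained
    coarticulation model the paper describes."""
    frames: List[FaceParams] = []
    prev = FaceParams([0.0, 0.0, 0.0])
    for p in phonemes:
        target = VISEMES.get(p, prev)
        mixed = [blend * t + (1.0 - blend) * q
                 for t, q in zip(target.weights, prev.weights)]
        frames.append(FaceParams(mixed))
        prev = frames[-1]
    return frames


frames = synthesize(["AA", "M"])
print([round(w, 2) for w in frames[-1].weights])  # → [0.25, 0.05, 0.5]
```

Because analysis and synthesis both speak `FaceParams`, analyzed motion can drive the animation directly and synthesized trajectories can be compared against analyzed ones, which is the kind of component linkage the abstract credits the framework with.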