Facial muscle adaptation for expression customization

  • Authors and affiliations:
  • Yasushi Ishibashi, Waseda University, Tokyo, Japan
  • Hiroyuki Kubo, Waseda University, Tokyo, Japan
  • Akinobu Maejima, Waseda University, Tokyo, Japan
  • Demetri Terzopoulos, University of California, Los Angeles, CA
  • Shigeo Morishima, Waseda University, Tokyo, Japan

  • Venue:
  • ACM SIGGRAPH 2007 posters
  • Year:
  • 2007

Abstract

There are two major approaches to creating 3DCG facial expressions: the first is based on facial muscle simulation, and the second is the blend-shape approach. The blend-shape approach is more familiar to creators than the facial muscle approach when synthesizing the facial expressions of 3DCG characters. However, the facial muscle model has the advantage of being physics-based. It can therefore produce realistic facial expressions and create them with fewer parameters than the blend-shape approach, thereby reducing processing time and computational requirements. We introduce a method that synthesizes individual facial expressions based on the facial muscle model [Waters 1987].
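To illustrate the contrast the abstract draws, the following is a minimal sketch of the blend-shape approach: each expression is a weighted sum of per-vertex offsets between target meshes and a neutral mesh. This is an illustrative example, not the authors' code; the meshes, target names, and weights are hypothetical.

```python
def blend_shapes(neutral, targets, weights):
    """Blend-shape synthesis: neutral mesh plus weighted per-vertex offsets.

    neutral: list of (x, y, z) vertices.
    targets: list of meshes, each the same length as `neutral`.
    weights: one blend weight per target mesh.
    """
    result = []
    for i, (x, y, z) in enumerate(neutral):
        dx = dy = dz = 0.0
        for mesh, w in zip(targets, weights):
            tx, ty, tz = mesh[i]
            # Accumulate the weighted offset of this target from neutral.
            dx += w * (tx - x)
            dy += w * (ty - y)
            dz += w * (tz - z)
        result.append((x + dx, y + dy, z + dz))
    return result


# Hypothetical two-vertex mesh with a single "smile" target at half strength.
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile = [(0.0, 0.2, 0.0), (1.0, 0.2, 0.0)]
print(blend_shapes(neutral, [smile], [0.5]))
# -> [(0.0, 0.1, 0.0), (1.0, 0.1, 0.0)]
```

Note that each target mesh adds a full set of per-vertex parameters, whereas a muscle model such as Waters' drives the whole mesh with a handful of muscle contraction values, which is the parameter-count advantage the abstract refers to.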