A feature-based approach to facial expression cloning: Virtual Humans and Social Agents

  • Authors:
Bongcheol Park; Heejin Chung; Tomoyuki Nishita; Sung Yong Shin

  • Affiliations:
TCLab, Korea Advanced Institute of Science and Technology, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Korea

  • Venue:
  • Computer Animation and Virtual Worlds - CASA 2005
  • Year:
  • 2005


Abstract

In this paper, we present a feature-based approach to cloning facial expressions from an input face model to an output model, using predefined source key-models and the corresponding target key-models. Adopting a scattered data interpolation technique, our approach consists of two parts: analysis of face key-models and synthesis of facial expressions. In the analysis part, carried out once at the beginning, the key-models are segmented automatically into five regions, each containing one of five facial features (the two eyes, the two cheeks, and the mouth), which gives rise to five sets of source key-shapes and the corresponding sets of target key-shapes. Using the key-shapes of each source feature, those of the corresponding target feature are parameterized. In the synthesis part, given a sequence of face models comprising an input animation, the five output features are obtained separately by blending their respective target key-shapes. These separately produced features are then combined to synthesize the output face model at each frame. Our feature-based approach enables convincing cloning of diverse expressions, including asymmetric ones, with a small number of face key-models, while running on-line in real time. Copyright © 2005 John Wiley & Sons, Ltd.
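
To make the synthesis step more concrete, below is a minimal sketch (not the authors' code) of how one feature could be cloned with scattered data interpolation: the source key-shapes of a feature are parameterized in a low-dimensional space (here via PCA, used only as an illustrative stand-in for the paper's parameterization), and an RBF interpolator built on the key-shape pairs blends the target key-shapes for each input frame. The function names (`fit_feature_cloner`, `clone`), the choice of SciPy's `RBFInterpolator`, and the PCA step are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_feature_cloner(src_keys, tgt_keys, n_params=4):
    """Build a per-feature expression cloner from key-shape correspondences.

    src_keys: (K, N_s, 3) source key-shapes of one facial feature (e.g., the mouth)
    tgt_keys: (K, N_t, 3) corresponding target key-shapes
    Returns a function mapping an input source feature shape to a blended target shape.
    """
    K = src_keys.shape[0]
    src_flat = src_keys.reshape(K, -1)           # (K, 3*N_s)
    tgt_flat = tgt_keys.reshape(K, -1)           # (K, 3*N_t)

    # Parameterize the source key-shapes in a low-dimensional space (PCA here,
    # purely illustrative of "parameterizing the target key-shapes by the source ones").
    mean = src_flat.mean(axis=0)
    basis = np.linalg.svd(src_flat - mean, full_matrices=False)[2][:min(n_params, K - 1)]
    params = (src_flat - mean) @ basis.T         # (K, p) parameter vectors of the key-shapes

    # Scattered data interpolation: key-shape parameters -> target key-shape geometry.
    rbf = RBFInterpolator(params, tgt_flat, kernel="thin_plate_spline")

    def clone(src_shape):
        """src_shape: (N_s, 3) geometry of the source feature at the current frame."""
        p = (src_shape.reshape(1, -1) - mean) @ basis.T
        return rbf(p).reshape(-1, 3)             # blended target feature geometry (N_t, 3)

    return clone
```

At run time, one such cloner per feature (two eyes, two cheeks, mouth) would be evaluated independently for every input frame, and the five resulting feature geometries stitched back into a single output face model, mirroring the per-feature blending and recombination described in the abstract.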