An example-based approach for facial expression cloning

  • Authors:
  • Hyewon Pyun (Korea Advanced Institute of Science and Technology)
  • Yejin Kim (Electronics and Telecommunications Research Institute)
  • Wonseok Chae (Korea Advanced Institute of Science and Technology)
  • Hyung Woo Kang (Korea Advanced Institute of Science and Technology)
  • Sung Yong Shin (Korea Advanced Institute of Science and Technology)

  • Venue:
  • Proceedings of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation
  • Year:
  • 2003

Abstract

In this paper, we present a novel example-based approach for cloning the facial expressions of a source model onto a target model while reflecting the characteristic features of the target model in the resulting animation. Our approach comprises three major parts: key-model construction, parameterization, and expression blending. We first present an effective scheme for constructing key-models. Given a set of source example key-models and their corresponding target key-models created by animators, we parameterize the target key-models using the source key-models and predefine weight functions for the parameterized target key-models based on radial basis functions. At runtime, given an input model with a facial expression, we compute the parameter vector of the corresponding output model, evaluate the weight values for the target key-models, and obtain the output model by blending the target key-models with those weights. The resulting animation preserves the facial expressions of the input model as well as the characteristic features of the target model specified by the animators. Our method is not only simple and accurate but also fast enough for real-time applications such as video games and Internet broadcasting.
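
To make the runtime step concrete, the sketch below illustrates the general idea described in the abstract: precompute radial-basis-function weight functions over the key-model parameter vectors so that each weight is 1 at its own key-model and 0 at the others, then, given an input parameter vector, evaluate those weights and blend the target key-models. The parameter representation, the Gaussian kernel, and all identifiers (ExpressionCloner, clone, eps) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rbf_kernel(r, eps=1.0):
    # Gaussian radial basis function; the kernel choice is an assumption,
    # not necessarily the one used in the paper.
    return np.exp(-(eps * r) ** 2)

class ExpressionCloner:
    """Minimal sketch of example-based expression cloning:
    precompute RBF weight functions over source key-model parameters,
    then blend target key-models at runtime."""

    def __init__(self, source_params, target_keymodels, eps=1.0):
        # source_params: (N, d) parameter vectors of the N source key-models
        # target_keymodels: (N, V, 3) vertex positions of the N target key-models
        self.P = np.asarray(source_params, dtype=float)
        self.T = np.asarray(target_keymodels, dtype=float)
        self.eps = eps
        # Pairwise distances between the source key-model parameter vectors
        D = np.linalg.norm(self.P[:, None, :] - self.P[None, :, :], axis=-1)
        Phi = rbf_kernel(D, eps)
        # Solve for coefficients so that weight j is 1 at key-model j
        # and 0 at all other key-models (cardinal interpolation).
        self.coeffs = np.linalg.solve(Phi, np.eye(len(self.P)))

    def clone(self, input_params):
        # Evaluate the weight functions at the input parameter vector ...
        r = np.linalg.norm(self.P - np.asarray(input_params, dtype=float), axis=-1)
        w = rbf_kernel(r, self.eps) @ self.coeffs
        # ... and blend the target key-models with those weights.
        return np.tensordot(w, self.T, axes=(0, 0))

# Usage with toy data: 3 key-models, 2-D parameters, 4-vertex target meshes
if __name__ == "__main__":
    src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    tgt = np.random.rand(3, 4, 3)
    cloner = ExpressionCloner(src, tgt)
    out = cloner.clone([0.5, 0.25])   # blended target expression
    print(out.shape)                  # (4, 3)
```

Because the coefficient matrix is solved once from the key-models, the per-frame cost is only one RBF evaluation and one weighted sum over the key-models, which is consistent with the abstract's claim of real-time performance.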