Example-based performance driven facial shape animation

  • Authors:
  • Yang Yang;Nanning Zheng;Yuehu Liu;Shaoyi Du;Yoshifumi Nishio

  • Affiliations:
  • Xi'an Jiaotong University, Xi'an, China and The University of Tokushima, Tokushima, Japan;Xi'an Jiaotong University, Xi'an, China;Xi'an Jiaotong University, Xi'an, China;Xi'an Jiaotong University, Xi'an, China;The University of Tokushima, Tokushima, Japan

  • Venue:
  • ICME'09 Proceedings of the 2009 IEEE international conference on Multimedia and Expo
  • Year:
  • 2009


Abstract

A novel performance-driven facial shape animation method is presented for automatically mapping expressions from a source face to a target face. Unlike prior expression cloning approaches, the proposed method animates a new target face with the help of real facial expression samples. The basic idea is to learn the shape deformation of the target face from samples in order to generate the corresponding expressions. The process consists of two main stages. First, source motion vectors are transferred through a statistical face model to generate a reasonable expression on the target face. Then, local deformation constraints are introduced to refine the animation results. In this second stage, the local deformation characteristics of each target facial organ are learned from the samples, preserving both the personality and the expression styles. Experimental results on different facial animations demonstrate the feasibility and effectiveness of the proposed method.
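The first stage described in the abstract, transferring source motion vectors through a statistical face model, can be sketched as follows. This is a minimal illustration under assumed conditions (PCA as the statistical shape model, synthetic 2-D landmark data, an arbitrary number of retained modes); the paper's actual model, landmark layout, and refinement constraints are not specified here.

```python
import numpy as np

# Hedged sketch: transfer source motion vectors via a statistical (PCA)
# shape model so the result stays within the target face's deformation
# subspace. All data below is synthetic stand-in data.

rng = np.random.default_rng(0)
n_landmarks = 68          # assumed 2-D landmark count (hypothetical)
n_samples = 40            # expression samples of the target face

# Synthetic neutral shape and expression samples of the target face.
target_neutral = rng.normal(size=2 * n_landmarks)
samples = target_neutral + 0.1 * rng.normal(size=(n_samples, 2 * n_landmarks))

# Build the statistical model: mean shape plus principal deformation modes.
mean_shape = samples.mean(axis=0)
U, S, Vt = np.linalg.svd(samples - mean_shape, full_matrices=False)
basis = Vt[:8]            # keep the leading 8 deformation modes (assumed)

def transfer(source_motion):
    """Project raw source motion vectors onto the target's learned
    deformation subspace, yielding a plausible target expression."""
    coeffs = basis @ source_motion          # motion in model coordinates
    return target_neutral + basis.T @ coeffs

source_motion = 0.05 * rng.normal(size=2 * n_landmarks)
animated = transfer(source_motion)
print(animated.shape)  # one deformed target shape vector
```

Projecting onto a subspace learned from the target's own samples is what keeps the transferred motion "reasonable" for that face; the paper's second stage would then refine this result with local per-organ constraints.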