Data-driven facial expression synthesis via Laplacian deformation

  • Authors:
  • Xianmei Wan; Xiaogang Jin

  • Affiliations:
  • State Key Lab of CAD & CG, Zhejiang University, Hangzhou 310027, People's Republic of China, and Zhejiang University of Finance & Economics, Hangzhou 310018, People's Republic of China; State Key Lab of CAD & CG, Zhejiang University, Hangzhou 310027, People's Republic of China

  • Venue:
  • Multimedia Tools and Applications
  • Year:
  • 2012

Abstract

Realistic talking heads play an important role in interactive multimedia applications. This paper presents a novel framework for synthesizing realistic facial animation driven by motion capture data using Laplacian deformation. We first capture facial expressions from a performer and then decompose the motion data into two components: the rigid movement of the head and the non-rigid change of facial expression. Exploiting the local-detail-preserving property of Laplacian coordinates, we clone the captured expression onto a neutral 3D face model via Laplacian deformation, selecting expression-independent points on the model as the fixed points when solving the Laplacian deformation equations. Experimental results show that our approach synthesizes realistic facial expressions in real time while preserving facial details, and comparisons with state-of-the-art facial expression synthesis methods verify its advantages. The approach is well suited to real-time multimedia systems.
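The abstract outlines two computational steps: removing the rigid head movement from the captured motion, and transferring the residual expression onto a neutral mesh with constrained Laplacian deformation. The sketch below illustrates both steps under stated assumptions; it is not the paper's implementation. It assumes a uniform (graph) Laplacian, soft least-squares positional constraints, and the Kabsch algorithm for rigid alignment; the function names (remove_rigid_motion, laplacian_deform) and the constraint weight w are illustrative choices, not from the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def remove_rigid_motion(src, dst):
    """Kabsch algorithm: rigid (R, t) best aligning marker set src onto dst.

    Applying the inverse of (R, t) to a mocap frame strips the rigid head
    movement, leaving only the non-rigid expression change.
    """
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

def uniform_laplacian(n_verts, faces):
    """Uniform graph Laplacian L = I - D^-1 A built from triangle faces."""
    rows, cols = [], []
    for a, b, c in faces:
        rows += [a, b, b, c, c, a]
        cols += [b, a, c, b, a, c]
    A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)),
                      shape=(n_verts, n_verts)).tocsr()
    A.data[:] = 1.0                                  # de-duplicate shared edges
    d_inv = sp.diags(1.0 / np.asarray(A.sum(axis=1)).ravel())
    return sp.identity(n_verts, format="csr") - d_inv @ A

def laplacian_deform(verts, faces, cons_ids, cons_pos, w=10.0):
    """Deform the mesh so constrained vertices reach cons_pos while the
    Laplacian coordinates (local surface details) are preserved in a
    least-squares sense. cons_ids should include both the mocap-driven
    points and the expression-independent fixed points.
    """
    n = len(verts)
    L = uniform_laplacian(n, faces)
    delta = L @ verts                                # differential coordinates
    m = len(cons_ids)
    C = sp.coo_matrix((w * np.ones(m), (np.arange(m), cons_ids)),
                      shape=(m, n)).tocsr()
    A = sp.vstack([L, C]).tocsc()
    out = np.empty_like(verts, dtype=float)
    for k in range(3):                               # solve x, y, z separately
        b = np.concatenate([delta[:, k], w * cons_pos[:, k]])
        out[:, k] = spla.lsqr(A, b)[0]
    return out
```

Since the constraint pattern is fixed across frames, a real-time system would plausibly pre-factor the normal equations of A once (e.g., a sparse Cholesky of AᵀA) and reuse the factorization every frame, updating only the right-hand side; the per-frame lsqr solve above is kept for brevity.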