Animating blendshape faces by cross-mapping motion capture data

  • Authors:
  • Zhigang Deng, Pei-Ying Chiang, Pamela Fox, Ulrich Neumann

  • Affiliations:
  • USC (all authors)

  • Venue:
  • I3D '06 Proceedings of the 2006 symposium on Interactive 3D graphics and games
  • Year:
  • 2006


Abstract

Animating 3D faces to achieve compelling realism is a challenging task in the entertainment industry. Previously proposed face-transfer approaches generally require a high-quality animated source face in order to transfer its motion to new 3D faces. In this work, we present a semi-automatic technique for directly animating popular 3D blendshape face models by mapping facial motion capture data spaces to 3D blendshape face spaces. Sparse markers on the face of a human subject are captured by a motion capture system while a video camera simultaneously records his/her front face; we then carefully select a few motion capture frames and the accompanying video frames as reference mocap-video pairs. Users manually tune blendshape weights to perceptually match the animated blendshape face model with the reference facial images (the reference mocap-video pairs), producing reference mocap-weight pairs. Finally, Radial Basis Function (RBF) regression is used to map any new facial motion capture frame to blendshape weights based on the reference mocap-weight pairs. Our results demonstrate that this technique efficiently animates blendshape face models while offering generality and flexibility.
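
The final mapping step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a Gaussian RBF kernel, NumPy arrays for the mocap frames and weights, and hypothetical function names (`rbf_fit`, `rbf_map`) and shapes chosen for clarity. The paper does not specify the kernel or solver used.

```python
import numpy as np

def rbf_fit(X_ref, W_ref, sigma=1.0):
    """Fit RBF regression from reference mocap frames to blendshape weights.

    X_ref : (n_refs, d) array  -- flattened marker positions of reference frames
    W_ref : (n_refs, k) array  -- manually tuned blendshape weights for each frame
    Returns (n_refs, k) coefficient matrix, one column per blendshape target.
    """
    # Pairwise distances between reference frames, then a Gaussian kernel matrix.
    D = np.linalg.norm(X_ref[:, None, :] - X_ref[None, :, :], axis=-1)
    Phi = np.exp(-(D / sigma) ** 2)
    # Solve Phi @ C = W_ref so the mapping interpolates the reference pairs exactly.
    return np.linalg.solve(Phi, W_ref)

def rbf_map(x_new, X_ref, C, sigma=1.0):
    """Map one new mocap frame (d,) to predicted blendshape weights (k,)."""
    d = np.linalg.norm(X_ref - x_new, axis=-1)
    phi = np.exp(-(d / sigma) ** 2)
    return phi @ C
```

Because the coefficients are solved against the kernel matrix of the reference frames, the mapping reproduces the user-tuned weights exactly at each reference mocap frame and interpolates smoothly between them for new frames.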