Example-based facial rigging

  • Authors: Hao Li (ETH Zurich / EPFL), Thibaut Weise (EPFL), Mark Pauly (EPFL)

  • Venue: ACM SIGGRAPH 2010 papers
  • Year: 2010


Abstract

We introduce a method for generating facial blendshape rigs from a set of example poses of a CG character. Our system transfers controller semantics and expression dynamics from a generic template to the target blendshape model, while solving for an optimal reproduction of the training poses. This enables a scalable design process, where the user can iteratively add more training poses to refine the blendshape expression space. However, plausible animations can be obtained even with a single training pose. We show how formulating the optimization in gradient space yields superior results as compared to a direct optimization on blendshape vertices. We provide examples for both hand-crafted characters and 3D scans of a real actor and demonstrate the performance of our system in the context of markerless art-directable facial tracking.
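To make the abstract's terminology concrete: a blendshape rig represents a facial pose as the neutral face plus a weighted sum of per-target vertex displacements, and the weights act as the animation controllers the paper transfers between characters. The following is a minimal sketch of evaluating such a delta blendshape model (the function name and toy data are illustrative, not from the paper, which optimizes the target shapes themselves rather than just evaluating them):

```python
import numpy as np

def evaluate_blendshapes(neutral, targets, weights):
    """Evaluate a delta blendshape rig.

    neutral : (V, 3) array of rest-pose vertex positions.
    targets : (K, V, 3) array of K blendshape target meshes.
    weights : (K,) array of controller values, typically in [0, 1].

    Returns the posed (V, 3) mesh: neutral + sum_k weights[k] * (targets[k] - neutral).
    """
    deltas = targets - neutral[None, :, :]            # per-target displacements
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: 2 targets over 3 vertices, all displacements uniform.
neutral = np.zeros((3, 3))
targets = np.stack([np.full((3, 3), 1.0),             # target 0: +1 everywhere
                    np.full((3, 3), -1.0)])           # target 1: -1 everywhere
pose = evaluate_blendshapes(neutral, targets, np.array([0.5, 0.25]))
# Each vertex coordinate is 0.5 * 1.0 + 0.25 * (-1.0) = 0.25
```

Because the rig is linear in the weights, fitting weights to a tracked pose is a least-squares problem, which is part of what makes blendshape models attractive for the markerless tracking application mentioned above.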