Spacetime expression cloning for blendshapes

  • Authors and affiliations:
  • Yeongho Seol (KAIST and Weta Digital); J.P. Lewis (Weta Digital); Jaewoo Seo (KAIST); Byungkuk Choi (KAIST); Ken Anjyo (OLM Digital and JST CREST); Junyong Noh (KAIST)

  • Venue:
  • ACM Transactions on Graphics (TOG)
  • Year:
  • 2012


Abstract

The goal of a practical facial animation retargeting system is to reproduce the character of a source animation on a target face while providing room for additional creative control by the animator. This article presents a novel spacetime facial animation retargeting method for blendshape face models. Our approach starts from the basic principle that the source and target movements should be similar. By interpreting movement as the derivative of position with time, and adding suitable boundary conditions, we formulate the retargeting problem as a Poisson equation. Specified (e.g., neutral) expressions at the beginning and end of the animation as well as any user-specified constraints in the middle of the animation serve as boundary conditions. In addition, a model-specific prior is constructed to represent the plausible expression space of the target face during retargeting. A Bayesian formulation is then employed to produce target animation that is consistent with the source movements while satisfying the prior constraints. Since the preservation of temporal derivatives is the primary goal of the optimization, the retargeted motion preserves the rhythm and character of the source movement and is free of temporal jitter. More importantly, our approach provides spacetime editing for the popular blendshape representation of facial models, exhibiting smooth and controlled propagation of user edits across surrounding frames.
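The core formulation described above can be illustrated on a single blendshape weight channel: matching the target curve's temporal second differences to the source's, with the specified start and end expressions as Dirichlet boundary conditions, yields a small tridiagonal linear system. The sketch below is a minimal, assumed reading of that basic Poisson step only; it omits the paper's mid-animation user constraints and the Bayesian model-specific prior, and the function name and per-channel treatment are illustrative, not the authors' actual implementation.

```python
import numpy as np

def retarget_channel(source, w_start, w_end):
    """Gradient-domain retargeting sketch for one blendshape weight channel.

    Solves a discrete Poisson equation: interior second differences of the
    target match those of the source, with Dirichlet boundary conditions
    (the specified, e.g. neutral, start and end weights).
    """
    n = len(source)
    # Right-hand side: discrete Laplacian (second differences) of the source.
    b = np.array([source[i - 1] - 2.0 * source[i] + source[i + 1]
                  for i in range(1, n - 1)])
    # Fold the boundary conditions into the right-hand side.
    b[0] -= w_start
    b[-1] -= w_end
    # Tridiagonal 1D Laplacian acting on the interior frames.
    A = (np.diag(-2.0 * np.ones(n - 2))
         + np.diag(np.ones(n - 3), 1)
         + np.diag(np.ones(n - 3), -1))
    interior = np.linalg.solve(A, b)
    return np.concatenate([[w_start], interior, [w_end]])
```

Because the optimization preserves temporal derivatives rather than absolute positions, identical boundary weights reproduce the source curve exactly, while shifted boundary weights smoothly offset the whole curve without altering its rhythm.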