Weighted pose space editing for facial animation

  • Authors:
  • Yeongho Seol (KAIST, Daejeon, Korea); Jaewoo Seo (KAIST, Daejeon, Korea); Paul Hyunjin Kim (KAIST, Daejeon, Korea); J. P. Lewis (Weta Digital and Victoria University, Wellington, New Zealand); Junyong Noh (KAIST, Daejeon, Korea)

  • Venue:
  • The Visual Computer: International Journal of Computer Graphics
  • Year:
  • 2012


Abstract

Blendshapes are the most commonly used approach to realistic facial animation in production. A blendshape model typically begins with a relatively small number of blendshape targets reflecting major muscles or expressions. However, the majority of the effort in constructing a production-quality model occurs in the subsequent addition of targets needed to reproduce various subtle expressions and to correct for the effects of various shapes in combination. To make this subsequent modeling process much more efficient, we present a novel editing method that removes the need for much of the iterative trial-and-error decomposition of an expression into targets. Isolated problematic frames of an animation are re-sculpted as desired and used as training data for a nonparametric regression that associates these shapes with the underlying blendshape weights. Using this technique, the artist’s correction to a problematic expression is automatically applied to similar expressions in an entire sequence, and indeed to all future sequences. The extent and falloff of the editing are controllable, and the effect is continuously propagated to all similar expressions. In addition, we present a search scheme that allows effective reuse of pre-sculpted editing examples. Our system greatly reduces the time and effort required by animators to create high-quality facial animations.
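
The following is a minimal NumPy sketch of the general idea the abstract describes: sculpted corrections at example frames are interpolated over the space of blendshape weights by a nonparametric (RBF-style) regression, so that similar expressions automatically receive a blended version of the correction. It is not the authors' exact weighted formulation; the function names, the Gaussian kernel, and the single `falloff` parameter are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, falloff):
    """Gaussian kernel over blendshape-weight (pose) space; 'falloff' controls
    how far a sculpted correction propagates to neighboring expressions."""
    d = np.linalg.norm(a - b)
    return np.exp(-(d / falloff) ** 2)

def fit_corrections(example_weights, example_deltas, falloff):
    """Solve for interpolation coefficients so each training pose exactly
    reproduces its sculpted vertex correction (standard RBF interpolation).
    example_weights: list of blendshape-weight vectors at edited frames.
    example_deltas:  (n, num_vertices * 3) array of sculpted vertex offsets."""
    n = len(example_weights)
    K = np.array([[rbf_kernel(example_weights[i], example_weights[j], falloff)
                   for j in range(n)] for i in range(n)])
    return np.linalg.solve(K, example_deltas)

def apply_correction(weights, example_weights, coeffs, falloff):
    """Evaluate the learned correction at an arbitrary frame's blendshape
    weights; the result is added on top of the ordinary blendshape output."""
    k = np.array([rbf_kernel(weights, w, falloff) for w in example_weights])
    return k @ coeffs
```

In use, an animator would re-sculpt one or two problematic frames, call `fit_corrections` with those frames' weight vectors and vertex offsets, and then `apply_correction` at every frame of the sequence; frames whose weights lie close to an edited pose receive most of the correction, while distant expressions are left essentially unchanged, with the `falloff` parameter standing in for the controllable extent of editing mentioned in the abstract.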