Optimized local blendshape mapping for facial motion retargeting

  • Authors: Wan-Chun Ma, Graham Fyffe, Paul Debevec
  • Affiliation: USC Institute for Creative Technologies
  • Venue: ACM SIGGRAPH 2011 Talks
  • Year: 2011

Abstract

A popular method for facial motion retargeting is local blendshape mapping [Pighin and Lewis 2006], in which each local facial region is controlled by a tracked feature (for example, a vertex in motion capture data). To map a target motion input onto blendshapes, a pose set is chosen for each facial region so as to minimize the retargeting error. However, because the best pose set for each region is chosen independently, the solution is likely to contain inconsistent pose sets across the face regions, as shown in Figure 1(b). Consequently, even though every pose set matches its local features, the retargeting result is not guaranteed to be spatially smooth. In addition, previous methods ignored temporal coherence, which is key to jitter-free results.
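
For concreteness, the independent per-region selection being critiqued can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the data layout, and the unconstrained least-squares weight solve are assumptions made for the sketch.

```python
import numpy as np

def solve_weights(B, x):
    """Least-squares blendshape weights for one candidate pose set:
    minimize ||B w - x||^2 over w (weight bounds omitted for brevity).
    B stacks the pose set's displacement vectors for this region's
    vertices; x holds the tracked feature displacements."""
    w, *_ = np.linalg.lstsq(B, x, rcond=None)
    return w, np.linalg.norm(B @ w - x)

def pick_pose_set(pose_sets, x):
    """Baseline selection: try every candidate pose set for a region
    and keep the one with minimal retargeting error."""
    best = None
    for name, B in pose_sets.items():
        w, err = solve_weights(B, x)
        if best is None or err < best[2]:
            best = (name, w, err)
    return best

def retarget_frame(region_pose_sets, features):
    """Each region is solved in isolation, so nothing enforces
    consistent pose-set choices between neighboring regions."""
    return {region: pick_pose_set(pose_sets, features[region])
            for region, pose_sets in region_pose_sets.items()}

# Toy usage with hypothetical regions and random data.
rng = np.random.default_rng(0)
region_pose_sets = {
    "brow":  {"setA": rng.standard_normal((9, 3)),
              "setB": rng.standard_normal((9, 2))},
    "mouth": {"setA": rng.standard_normal((12, 4)),
              "setB": rng.standard_normal((12, 3))},
}
features = {"brow": rng.standard_normal(9),
            "mouth": rng.standard_normal(12)}
print(retarget_frame(region_pose_sets, features))
```

Because `pick_pose_set` sees only its own region's error, adjacent regions can select entirely different pose sets, and each frame is solved without reference to its neighbors in time, which is the source of the spatial discontinuities and temporal jitter described above.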