A framework for locally retargeting and rendering facial performance

  • Authors:
  • Ko-Yun Liu (Next Media Animation, Taiwan, and Department of Computer Science, National Tsing-Hua University, Taiwan)
  • Wan-Chun Ma (Institute for Creative Technologies, University of Southern California, CA, USA)
  • Chun-Fa Chang (Department of Computer Science and Information Engineering, National Taiwan Normal University, Taiwan)
  • Chuan-Chang Wang (Next Media Animation, Taiwan)
  • Paul Debevec (Institute for Creative Technologies, University of Southern California, CA, USA)

  • Venue:
  • Computer Animation and Virtual Worlds
  • Year:
  • 2011

Abstract

We present a facial motion retargeting method that drives a blendshape rig from marker-based motion capture data. The main purpose of the proposed technique is to allow a blendshape rig to create facial expressions that conform best to the current motion capture input, regardless of the underlying blendshape poses. In other words, even if all of the blendshape poses comprise only symmetrical facial expressions, our method can still create asymmetrical expressions without physically splitting any of the poses into more localized blendshapes. An automatic segmentation technique based on the analysis of facial motion creates the facial regions used for local retargeting. We also show that normal maps can be blended for rendering within the same framework; rendering with the blended normal map significantly improves surface appearance and detail. Copyright © 2011 John Wiley & Sons, Ltd.
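To illustrate the core idea described in the abstract, the following is a minimal sketch (not the authors' actual formulation) of fitting blendshape weights to marker displacements, and of solving per facial region so that a rig built only from symmetrical poses can still produce asymmetrical expressions. All function names, the matrix layout (one column of stacked marker displacements per blendshape), and the region dictionary are assumptions for illustration:

```python
import numpy as np

def solve_blendshape_weights(deltas, target, clamp=(0.0, 1.0)):
    """Least-squares fit of blendshape weights to captured marker motion.

    deltas : (3m, n) array; column j holds the stacked x/y/z displacements
             of the m markers when blendshape j is at full weight
             (hypothetical layout, for illustration only).
    target : (3m,) array of displacements from the current mocap frame.
    """
    w, *_ = np.linalg.lstsq(deltas, target, rcond=None)
    # Keep weights in the valid blendshape range.
    return np.clip(w, *clamp)

def retarget_locally(deltas, target, regions):
    """Solve an independent weight set per facial region.

    regions maps a region name to the row indices of its markers, so a
    symmetric blendshape can receive different weights on each side of
    the face, yielding an asymmetric expression.
    """
    return {name: solve_blendshape_weights(deltas[idx], target[idx])
            for name, idx in regions.items()}

# Toy example: one blendshape moves a left-side marker, one a right-side
# marker; the capture shows motion only on the left.
deltas = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 0.0],
                   [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
target = np.array([0.5, 0.0, 0.0, 0.0, 0.0, 0.0])
regions = {"left": np.arange(0, 3), "right": np.arange(3, 6)}

local = retarget_locally(deltas, target, regions)
# local["left"] activates the first shape at 0.5; local["right"] stays at 0.
```

Blending normal maps in the same framework would amount to weighting each pose's normal map texels by the same per-region weights before renormalizing the result.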