Driving high-resolution facial blendshapes with video performance capture

  • Authors:
  • Graham Fyffe, Andrew Jones, Oleg Alexander, Ryosuke Ichikari, Paul Graham, Koki Nagano, Jay Busch, Paul Debevec

  • Affiliations:
  • USC Institute for Creative Technologies (all authors)

  • Venue:
  • ACM SIGGRAPH 2013 Talks
  • Year:
  • 2013


Abstract

We present a technique for creating realistic facial animation from a set of high-resolution static scans of an actor's face, driven by passive video of the actor from one or more viewpoints. We capture high-resolution static geometry using multi-view stereo and gradient-based photometric stereo [Ghosh et al. 2011]. The scan set includes around 30 expressions, largely inspired by the Facial Action Coding System (FACS); examples of the input scan geometry can be seen in Figure 1(a). The base topology is defined by an artist for each subject's neutral scan. The dynamic performance can be shot under existing environmental illumination using one or more off-the-shelf HD video cameras.