Animatable facial reflectance fields

  • Authors:
  • Tim Hawkins; Andreas Wenger; Chris Tchou; Andrew Gardner; Fredrik Göransson; Paul Debevec

  • Affiliations:
  • University of Southern California Institute for Creative Technologies (Hawkins, Wenger, Tchou, Gardner, Debevec); Linköping University, Norrköping Visualization and Interaction Studio, Sweden (Göransson)

  • Venue:
  • EGSR'04: Proceedings of the Fifteenth Eurographics Conference on Rendering Techniques
  • Year:
  • 2004

Abstract

We present a technique for creating an animatable image-based appearance model of a human face, able to capture appearance variation over changing facial expression, head pose, view direction, and lighting condition. Our capture process makes use of a specialized lighting apparatus designed to rapidly illuminate the subject sequentially from many different directions in just a few seconds. For each pose, the subject remains still while six video cameras capture their appearance under each of the directions of lighting. We repeat this process for approximately 60 different poses, capturing different expressions, visemes, head poses, and eye positions. The images for each of the poses and camera views are registered to each other semi-automatically with the help of fiducial markers. The result is a model which can be rendered realistically under any linear blend of the captured poses and under any desired lighting condition by warping, scaling, and blending data from the original images. Finally, we show how to drive the model with performance capture data, where the pose is not necessarily a linear combination of the original captured poses.
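The abstract describes two linear operations: relighting each captured pose as a weighted sum of its per-direction basis images, and blending the relit poses (after warping them into correspondence). The following is a minimal sketch of those two steps in Python/NumPy; the function names, array shapes, and the assumption that image warping has already been applied are illustrative and not taken from the paper.

  import numpy as np

  def relight(basis_images, light_weights):
      # basis_images:  (num_directions, H, W, 3), one image per lighting
      #                direction captured by the lighting apparatus.
      # light_weights: (num_directions,), the desired lighting environment
      #                resampled into the captured directions.
      # Light transport is linear, so the relit image is a weighted sum
      # of the basis images.
      return np.tensordot(light_weights, basis_images, axes=1)

  def blend_poses(relit_poses, pose_weights):
      # relit_poses:  (num_poses, H, W, 3), relit images assumed to be
      #               already warped into correspondence (the paper uses
      #               fiducial-marker registration for this step).
      # pose_weights: (num_poses,), the desired linear blend of poses.
      relit_poses = np.asarray(relit_poses, dtype=np.float64)
      pose_weights = np.asarray(pose_weights, dtype=np.float64)
      pose_weights = pose_weights / pose_weights.sum()
      return np.tensordot(pose_weights, relit_poses, axes=1)

This is only meant to make the "warping, scaling, and blending" description concrete; the paper's actual pipeline also handles view interpolation across the six cameras and non-linear pose targets from performance capture.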