High fidelity facial hair capture

  • Authors: Graham Fyffe
  • Affiliations: USC Institute for Creative Technologies
  • Venue: ACM SIGGRAPH 2012 Talks
  • Year: 2012

Abstract

Modeling human hair from photographs is a topic of ongoing interest to the graphics community. Yet the literature is predominantly concerned with the hair volume on the scalp, and it remains difficult to capture digital characters with interesting facial hair. Recent stereo-vision-based facial capture systems (e.g., [Furukawa and Ponce 2010; Beeler et al. 2010]) can capture extremely fine facial detail from high-resolution photographs, but any facial hair present on the subject is reconstructed as a blobby mass. Prior work in facial hair photo-modeling is based on learned priors and image cues [Herrera et al.], and does not reconstruct the individual hairs belonging uniquely to the subject. We propose a method for capturing the three-dimensional shape of complex, multi-colored facial hair from a small number of photographs taken simultaneously under uniform illumination. The method produces a set of oriented hair particles, suitable for point-based rendering.
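
The abstract does not specify the particle attributes, but a point sample with a position, a unit tangent along the local hair direction, and a per-particle color is one plausible reading of "oriented hair particles." The sketch below is a minimal, hypothetical C++ data layout and export routine under that assumption; the struct fields, the writeParticles function, and the ASCII column order are illustrative and not taken from the paper.

```cpp
#include <array>
#include <cstdio>
#include <vector>

// Hypothetical representation of one oriented hair particle: a point sample
// on a hair fiber with a unit tangent direction and an RGB color, i.e. the
// kind of attribute set a point-based renderer typically consumes.
// (Assumed layout, not specified by the paper.)
struct HairParticle {
    std::array<float, 3> position;  // 3D location of the sample
    std::array<float, 3> tangent;   // unit vector along the local hair direction
    std::array<float, 3> color;     // per-particle RGB, supporting multi-colored hair
    float radius;                   // splat radius for point-based rendering
};

// Write particles to a simple ASCII file that a point splatter could load.
// The column layout (x y z tx ty tz r g b radius) is an assumption.
void writeParticles(const std::vector<HairParticle>& particles, const char* path) {
    std::FILE* f = std::fopen(path, "w");
    if (!f) return;
    for (const HairParticle& p : particles) {
        std::fprintf(f, "%f %f %f %f %f %f %f %f %f %f\n",
                     p.position[0], p.position[1], p.position[2],
                     p.tangent[0], p.tangent[1], p.tangent[2],
                     p.color[0], p.color[1], p.color[2],
                     p.radius);
    }
    std::fclose(f);
}

int main() {
    // Two made-up samples standing in for reconstructed beard particles.
    std::vector<HairParticle> beard = {
        {{0.010f, -0.040f, 0.110f}, {0.00f, -0.80f, 0.60f}, {0.35f, 0.25f, 0.18f}, 0.0002f},
        {{0.012f, -0.041f, 0.110f}, {0.10f, -0.78f, 0.62f}, {0.60f, 0.55f, 0.50f}, 0.0002f},
    };
    writeParticles(beard, "facial_hair_particles.txt");
    return 0;
}
```

Keeping the particles as independent point samples (rather than connected strands) matches the abstract's framing of the output as a set suitable for point-based rendering, where each sample is splatted individually.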