Single photo estimation of hair appearance

  • Authors (with affiliations):
  • Nicolas Bonneel (REVES/INRIA Sophia-Antipolis); Sylvain Paris (Adobe Systems, Inc.); Michiel van de Panne (University of British Columbia and REVES/INRIA Sophia-Antipolis); Frédo Durand (MIT CSAIL); George Drettakis (REVES/INRIA Sophia-Antipolis)

  • Venue:
  • EGSR '09: Proceedings of the Twentieth Eurographics Conference on Rendering
  • Year:
  • 2009

Abstract

Significant progress has been made in high-quality hair rendering, but it remains difficult to choose parameter values that reproduce a given real hair appearance. In particular, for applications such as games where naive users want to create their own avatars, tuning complex parameters is not practical. Our approach analyses a single flash photograph and estimates model parameters that reproduce the visual likeness of the observed hair. The estimated parameters include color absorptions, three reflectance lobe parameters of a multiple-scattering rendering model, and a geometric noise parameter. We use a novel melanin-based model to capture the natural subspace of hair absorption parameters. At its core, the method assumes that images of hair with similar color distributions are also similar in appearance. This allows us to recast the issue as an image retrieval problem where the photo is matched with a dataset of rendered images; we thus also match the model parameters used to generate these images. An earth-mover's distance is used between luminance-weighted color distributions to gauge similarity. We conduct a perceptual experiment to evaluate this metric in the context of hair appearance and demonstrate the method on 64 photographs, showing that it can achieve a visual likeness for a large variety of input photos.
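The retrieval step described above compares luminance-weighted color distributions with an earth-mover's distance. As a minimal sketch of that idea, the following computes per-channel 1D histograms weighted by pixel luminance and sums their 1D EMDs; the paper operates on full color distributions, so the histogram layout, Rec. 601 luma weights, and all function names here are illustrative assumptions, not the authors' implementation.

```python
def emd_1d(hist_a, hist_b):
    """Earth-mover's distance between two 1D histograms over the same bins.

    In 1D, the EMD reduces to the summed absolute difference of the
    cumulative normalized histograms.
    """
    total_a, total_b = sum(hist_a), sum(hist_b)
    a = [v / total_a for v in hist_a]
    b = [v / total_b for v in hist_b]
    dist, carry = 0.0, 0.0
    for pa, pb in zip(a, b):
        carry += pa - pb        # mass that must still be moved to later bins
        dist += abs(carry)
    return dist

def luminance_weighted_hists(pixels, bins=8):
    """Per-channel histograms where each pixel contributes its luminance.

    `pixels` is a list of (r, g, b) tuples in [0, 1]; Rec. 601 luma
    weights serve as an assumed luminance proxy.
    """
    hists = [[0.0] * bins for _ in range(3)]
    for px in pixels:
        lum = 0.299 * px[0] + 0.587 * px[1] + 0.114 * px[2]
        for c in range(3):
            idx = min(int(px[c] * bins), bins - 1)
            hists[c][idx] += lum
    return hists

def hair_distance(pixels_a, pixels_b, bins=8):
    """Sum per-channel EMDs as a crude stand-in for the paper's metric."""
    ha = luminance_weighted_hists(pixels_a, bins)
    hb = luminance_weighted_hists(pixels_b, bins)
    return sum(emd_1d(x, y) for x, y in zip(ha, hb))
```

Retrieval would then amount to picking, from the dataset of rendered images, the one whose histograms minimize `hair_distance` against the photo, and reading off the rendering parameters (absorptions, lobe parameters, geometric noise) used to generate it.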