Modeling hair from multiple views

  • Authors:
  • Yichen Wei; Eyal Ofek; Long Quan; Heung-Yeung Shum

  • Affiliations:
  • The Hong Kong University of Science and Technology; Microsoft Research Asia; The Hong Kong University of Science and Technology; Microsoft Research Asia

  • Venue:
  • ACM SIGGRAPH 2005 Papers
  • Year:
  • 2005

Abstract

In this paper, we propose a novel image-based approach to modeling hair geometry from images taken at multiple viewpoints. Unlike previous hair modeling techniques that require intensive user interaction or rely on a special capture setup under controlled illumination, we use a handheld camera to capture hair images under uncontrolled illumination. Our multi-view approach is natural and flexible for capture, and it provides inherently strong and accurate geometric constraints for recovering hair models.

In our approach, hair fibers are synthesized from local image orientations. Each synthesized fiber segment is validated and optimally triangulated from all views in which it is visible. The hair volume and the visibility of synthesized fibers can also be reliably estimated from multiple views. Flexible acquisition, little user interaction, and high-quality results on complex hair models are the key advantages of our method.
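The abstract's core geometric step, triangulating a fiber point from its projections in multiple calibrated views, can be illustrated with a standard linear (DLT) triangulation. The sketch below is not the paper's implementation; the camera matrices, point, and function name are hypothetical, and it shows only the generic two-view case of recovering a 3D point from its image projections.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices (hypothetical, calibrated)
    x1, x2 : 2D projections of the same point in normalized image coordinates
    """
    # Each view contributes two linear constraints x * (P[2] . X) = P[0] . X, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous point X minimizing |A X| is the last right singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two hypothetical cameras: identity pose, and a unit translation along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 4.0])
x1 = X_true[:2] / X_true[2]                                 # projection in view 1
x2 = (X_true + np.array([-1.0, 0.0, 0.0]))[:2] / X_true[2]  # projection in view 2

X_hat = triangulate_point(P1, P2, x1, x2)
print(np.allclose(X_hat, X_true, atol=1e-6))  # → True
```

In the paper's setting each short fiber segment would be triangulated from all views in which it is visible, so the system `A` would stack two rows per visible view rather than exactly two views.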