Dynamic hair manipulation in images and videos

  • Authors:
  • Menglei Chai, Lvdi Wang, Yanlin Weng, Xiaogang Jin, Kun Zhou

  • Affiliations:
  • Zhejiang University; Microsoft Research Asia; Zhejiang University; Zhejiang University; Zhejiang University

  • Venue:
  • ACM Transactions on Graphics (TOG) - SIGGRAPH 2013 Conference Proceedings
  • Year:
  • 2013

Abstract

This paper presents a single-view hair modeling technique for generating visually and physically plausible 3D hair models with modest user interaction. By solving an unambiguous 3D vector field explicitly from the image and adopting an iterative hair generation algorithm, we can create hair models that not only visually match the original input very well but also possess physical plausibility (e.g., having strand roots fixed on the scalp and preserving the length and continuity of real strands in the image as much as possible). The latter property enables us to manipulate hair in many new ways that were previously very difficult with a single image, such as dynamic simulation or interactive hair shape editing. We further extend the modeling approach to handle simple video input, and generate dynamic 3D hair models. This allows users to manipulate hair in a video or transfer styles from images to videos.
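To make the strand-generation idea concrete, the sketch below grows a single strand from a scalp root by iteratively stepping along a 3D direction field, which is the basic operation the abstract's "iterative hair generation algorithm" builds on. This is a minimal illustration under assumed inputs, not the paper's implementation: it presumes the solved vector field is already available as a dense grid `field` of shape (X, Y, Z, 3), the helper names `sample_field` and `trace_strand` are hypothetical, and the length and continuity constraints the paper enforces are omitted.

```python
import numpy as np

def sample_field(field, pos):
    """Nearest-neighbor lookup of the direction field at a continuous
    position (hypothetical helper; a real tracer would interpolate)."""
    idx = np.clip(np.round(pos).astype(int), 0, np.array(field.shape[:3]) - 1)
    return field[tuple(idx)]

def trace_strand(field, root, step=0.5, max_steps=400):
    """Grow one strand polyline from a scalp root by following the field.
    Loosely mirrors 'grow along the solved 3D vector field'; the paper
    additionally preserves strand length and continuity from the image."""
    pts = [np.asarray(root, dtype=float)]
    for _ in range(max_steps):
        d = sample_field(field, pts[-1])
        n = np.linalg.norm(d)
        if n < 1e-6:          # field vanishes: stop growing this strand
            break
        pts.append(pts[-1] + step * d / n)
    return np.stack(pts)

# Toy usage: a constant downward field on a 64^3 grid.
field = np.zeros((64, 64, 64, 3))
field[..., 1] = -1.0                      # hair "falls" along -y
strand = trace_strand(field, root=(32, 60, 32))
print(strand.shape)                       # (N, 3) polyline of strand points
```

In the full method, each traced strand would additionally be rooted on a fitted scalp surface and regularized against the strands observed in the image; the sketch stops only when the field vanishes or the step budget runs out.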