Multi-linear data-driven dynamic hair model with efficient hair-body collision handling

  • Authors:
  • Peng Guan; Leonid Sigal; Valeria Reznitskaya; Jessica K. Hodgins

  • Affiliations:
  • Brown University; Disney Research, Pittsburgh; Disney Research, Pittsburgh; Disney Research, Pittsburgh and Carnegie Mellon University

  • Venue:
  • SCA '12: Proceedings of the 11th ACM SIGGRAPH / Eurographics Symposium on Computer Animation
  • Year:
  • 2012

Abstract

We present a data-driven method for learning hair models that enables the creation and animation of many interactive virtual characters in real-time (for gaming, character pre-visualization, and design). Our model has a number of properties that make it appealing for interactive applications: (i) it preserves the key dynamic properties of physical simulation at a fraction of the computational cost, (ii) it gives the user continuous interactive control over the hair styles (e.g., lengths) and dynamics (e.g., softness) without requiring re-styling or re-simulation, (iii) it handles hair-body collisions explicitly using optimization in the low-dimensional reduced space, and (iv) it allows modeling of external phenomena (e.g., wind). Our method builds on the recent success of reduced models for clothing and fluid simulation, but extends them in a number of significant ways. We model the motion of hair in a conditional reduced sub-space, where the hair basis vectors, which encode dynamics, are linear functions of user-specified hair parameters. We formulate collision handling as an optimization in this reduced sub-space using fast iterative least squares. We demonstrate our method by building dynamic, user-controlled models of hair styles.
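The two core ideas of the abstract, a reduced basis that is a linear function of user parameters and collision handling as a least-squares problem in that reduced space, can be illustrated with a toy sketch. All names, dimensions, and the spherical body proxy below are hypothetical assumptions for illustration; the learned bases here are random placeholders, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_verts = 50            # hair vertices (toy number)
dim = 3 * n_verts       # full hair state dimension
k = 8                   # reduced sub-space dimension
n_params = 2            # user parameters, e.g. length and softness sliders

# Placeholder "learned" quantities: mean hair shape and parameter-dependent bases.
mean = rng.normal(size=dim)
B0 = rng.normal(size=(dim, k))
B_param = rng.normal(size=(n_params, dim, k))

def basis(p):
    """Conditional basis: basis vectors are linear functions of user params p."""
    return B0 + np.tensordot(p, B_param, axes=1)

def reconstruct(p, z):
    """Full hair state from reduced coordinates z under parameters p."""
    return mean + basis(p) @ z

def resolve_collisions(p, z, center, radius, iters=10):
    """Toy hair-body collision handling: iteratively push penetrating vertices
    onto a spherical body proxy by re-solving a least-squares problem in the
    reduced space (a sketch of the idea, not the paper's exact formulation)."""
    B = basis(p)
    for _ in range(iters):
        x = (mean + B @ z).reshape(n_verts, 3)
        d = x - center
        dist = np.linalg.norm(d, axis=1)
        inside = dist < radius
        if not inside.any():
            break
        target = x.copy()
        target[inside] = center + d[inside] * (radius / dist[inside, None])
        # Least-squares solve for reduced coordinates best matching the targets.
        z, *_ = np.linalg.lstsq(B, target.ravel() - mean, rcond=None)
    return z

p = np.array([0.5, -0.3])                       # hypothetical slider values
z = rng.normal(size=k)
z2 = resolve_collisions(p, z, center=np.zeros(3), radius=2.0)
x2 = reconstruct(p, z2).reshape(n_verts, 3)
```

Because the optimization stays in the k-dimensional reduced space, each iteration solves a small least-squares system rather than one over all 3 x n_verts coordinates, which is what makes this kind of collision handling cheap enough for interactive use.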