A novel method for fast and high-quality rendering of hair

  • Authors:
  • Songhua Xu;Francis C. M. Lau;Hao Jiang;Yunhe Pan

  • Affiliations:
  • Songhua Xu: CAD & CG State Key Lab of China, Zhejiang University, P.R. China; Department of Computer Science, Yale University, New Haven, CT; Department of Computer Science, The University of Hong Kong, Hong Kong, P.R. China
  • Francis C. M. Lau: Department of Computer Science, The University of Hong Kong, Hong Kong, P.R. China
  • Hao Jiang: CAD & CG State Key Lab of China, Zhejiang University, P.R. China; Department of Computer Science, The University of Hong Kong, Hong Kong, P.R. China
  • Yunhe Pan: CAD & CG State Key Lab of China, Zhejiang University, P.R. China

  • Venue:
  • EGSR'06: Proceedings of the 17th Eurographics Conference on Rendering Techniques
  • Year:
  • 2006

Abstract

This paper proposes a new rendering approach for hair. The model incorporates semantics-related information directly into the appearance modeling function, which we call a Semantics-Aware Texture Function (SATF). This appearance modeling function is well suited for constructing an off-line/on-line hybrid algorithm that achieves fast and high-quality rendering of hair. The off-line phase stores intermediate results in a database for sample geometries under different viewing and lighting conditions, completing a large part of the overall computation and leaving only a few dynamic tasks to be performed on-line. Our model has four levels, from the whole hair volume down to the fine hair density level, and we employ an efficient disk-like structure to represent hair distributions inside a hair cluster. Because the intermediate database carries opacity information, self-shadows can be generated easily. Experimental results show that our method produces high-quality renderings efficiently. Supplementary materials and supporting demos can be found on our project website http://www.cs.hku.hk/~songhua/hair-rendering/.
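To illustrate the off-line/on-line split described in the abstract, the following is a minimal, hypothetical sketch: an appearance database baked over discretized view/light angles, queried cheaply at render time, with opacity stored per sample so the renderer can attenuate light for self-shadowing. The names (`AppearanceDB`, `bake`, `shade`) and the stand-in appearance function are assumptions for illustration only; they are not the paper's SATF or its four-level hair model.

```python
import math
import numpy as np

class AppearanceDB:
    """Toy precomputed table of per-cluster color and opacity over view/light angles."""

    def __init__(self, n_view=16, n_light=16):
        self.n_view, self.n_light = n_view, n_light
        # RGB color and opacity per (view, light) bin -- filled by the off-line phase
        self.color = np.zeros((n_view, n_light, 3))
        self.opacity = np.zeros((n_view, n_light))

    def bake(self, sample_fn):
        """Off-line phase: evaluate an expensive appearance model once per angle bin."""
        for i in range(self.n_view):
            for j in range(self.n_light):
                theta_v = math.pi * i / (self.n_view - 1)
                theta_l = math.pi * j / (self.n_light - 1)
                self.color[i, j], self.opacity[i, j] = sample_fn(theta_v, theta_l)

    def shade(self, theta_v, theta_l):
        """On-line phase: cheap nearest-bin lookup (a real system would interpolate)."""
        i = round(theta_v / math.pi * (self.n_view - 1))
        j = round(theta_l / math.pi * (self.n_light - 1))
        c, a = self.color[i, j], self.opacity[i, j]
        # Storing opacity with each sample lets the renderer approximate
        # self-shadowing without re-traversing the hair volume on-line.
        return c * a, a

# Stand-in appearance function used only to exercise the sketch.
def fake_hair_appearance(theta_v, theta_l):
    diffuse = max(0.0, math.cos(theta_l))
    return np.array([0.35, 0.25, 0.15]) * diffuse, 0.6

db = AppearanceDB()
db.bake(fake_hair_appearance)
print(db.shade(0.4, 0.9))
```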