Compact and efficient generation of radiance transfer for dynamically articulated characters

  • Authors:
  • Derek Nowrouzezahrai, Patricio Simari, Evangelos Kalogerakis, Karan Singh, Eugene Fiume

  • Affiliations:
  • University of Toronto (all authors)

  • Venue:
  • Proceedings of the 5th international conference on Computer graphics and interactive techniques in Australia and Southeast Asia
  • Year:
  • 2007


Abstract

We present a data-driven technique for generating the precomputed radiance transfer (PRT) vectors of an animated character as a function of its joint angles. We learn a linear model that produces real-time lighting effects on articulated characters, capturing the soft self-shadows cast by dynamic distant lighting; indirect illumination can also be reproduced within our framework. Previous data-driven techniques have restricted the type of lighting response (generating only ambient occlusion) or the type of animated sequence (response functions to external forces), or have required complicated runtime algorithms and incurred non-trivial memory costs. We provide insights into the dimensionality reduction of the pose and coefficient spaces. Our model can be fit quickly as a preprocess, is very compact (~1 MB), and generates runtime transfer vectors with a simple real-time algorithm (100 Hz using a CPU-only implementation). We can reproduce lighting effects on hundreds of trained poses using less memory than is required to store a single mesh's PRT coefficients. Moreover, our model extrapolates to produce smooth, believable lighting on novel poses, and our method can be easily integrated into existing interactive content pipelines.
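The pipeline the abstract describes — reduce the PRT coefficient space, then fit a linear map from joint angles to the reduced coefficients so that runtime evaluation is a single matrix-vector product — can be sketched as follows. This is a minimal illustration under assumed dimensions and synthetic data, not the paper's implementation; all names (`poses`, `coeffs`, `transfer_vector`, the component count `k`) are illustrative.

```python
import numpy as np

# Synthetic training set: for each of n_poses training poses we have a
# joint-angle vector and the character's flattened PRT coefficient vector.
# Dimensions are placeholders, not taken from the paper.
rng = np.random.default_rng(0)
n_poses, n_joints, n_coeffs = 200, 30, 4096
poses = rng.normal(size=(n_poses, n_joints))
# Stand-in data whose coefficients depend (nearly) linearly on pose.
true_map = rng.normal(size=(n_joints, n_coeffs))
coeffs = poses @ true_map + 0.01 * rng.normal(size=(n_poses, n_coeffs))

# Dimensionality reduction of the coefficient space via PCA (SVD):
mean_c = coeffs.mean(axis=0)
U, S, Vt = np.linalg.svd(coeffs - mean_c, full_matrices=False)
k = 32                                   # retained principal components
basis = Vt[:k]                           # (k, n_coeffs) reduced basis
reduced = (coeffs - mean_c) @ basis.T    # (n_poses, k) training targets

# Linear model: reduced coefficients as a function of joint angles
# (least-squares fit, with a bias column appended to the pose vector).
X = np.hstack([poses, np.ones((n_poses, 1))])
W, *_ = np.linalg.lstsq(X, reduced, rcond=None)

def transfer_vector(pose):
    """Runtime reconstruction: one small matrix product per pose."""
    x = np.append(pose, 1.0)
    return mean_c + (x @ W) @ basis

# The stored model (W, basis, mean_c) is far smaller than storing the
# full PRT coefficients for every trained pose.
rel_err = (np.linalg.norm(transfer_vector(poses[0]) - coeffs[0])
           / np.linalg.norm(coeffs[0]))
```

On this synthetic linear data the reconstruction error is small; the point of the structure is that the runtime cost is independent of the number of training poses, matching the compact-model and real-time claims in the abstract.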