Camera space volumetric shadows

  • Authors:
  • Johannes Hanika; Peter Hillman; Martin Hill; Luca Fascione

  • Affiliations:
  • Weta Digital Ltd (all authors)

  • Venue:
  • Proceedings of the Digital Production Symposium
  • Year:
  • 2012

Abstract

We transform irregularly sampled shadow map data to deep image buffers in camera space, which are then used to create volumetric shadows in a deep compositing workflow. Our technique poses no restrictions on the sample locations of the shadow map and can thus be used with a variety of adaptive approaches to produce more precise shadows closer to the camera. To construct watertight shafts towards the light source forming crepuscular rays, we use a two-dimensional quad tree in light space. This structure is built from the shadow samples independently of the camera position, making stereo renders and camera animations for static light sources and geometry more efficient. The actual integration of volumetric light transport is then left to a fast image space deep compositing workflow, enabling short turnaround times for cinematic lighting design. We present a simple, scalable ray tracing kernel that converts the quad tree representation to a deep image for each camera; the ray tracing itself takes only 25% of the overall processing time.
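To make the light-space quad tree concrete, the sketch below shows a minimal 2D quad tree over shadow samples stored as light-space positions with occluder depths. This is an illustrative assumption, not the authors' implementation: the node capacity, subdivision rule, and the `min_depth` occlusion query are hypothetical choices, but they reflect the abstract's key property that the structure is built from the shadow samples alone, independently of the camera.

```python
class QuadTreeNode:
    """A node of a 2D quad tree over light-space shadow samples (x, y, depth)."""

    def __init__(self, x0, y0, size, capacity=4):
        self.x0, self.y0, self.size = x0, y0, size
        self.capacity = capacity      # max samples per leaf before subdividing
        self.samples = []             # (x, y, depth) tuples while a leaf
        self.children = None          # four child nodes after subdivision

    def insert(self, x, y, depth):
        if self.children is None:
            self.samples.append((x, y, depth))
            if len(self.samples) > self.capacity:
                self._subdivide()
            return
        self._child_for(x, y).insert(x, y, depth)

    def _subdivide(self):
        # Split the cell into four equal quadrants and push samples down.
        h = self.size / 2.0
        self.children = [
            QuadTreeNode(self.x0 + dx * h, self.y0 + dy * h, h, self.capacity)
            for dy in (0, 1) for dx in (0, 1)
        ]
        for (x, y, d) in self.samples:
            self._child_for(x, y).insert(x, y, d)
        self.samples = []

    def _child_for(self, x, y):
        h = self.size / 2.0
        ix = 1 if x >= self.x0 + h else 0
        iy = 1 if y >= self.y0 + h else 0
        return self.children[2 * iy + ix]

    def min_depth(self, x, y):
        """Depth of the nearest occluder recorded in the leaf containing (x, y).

        Returns infinity if no shadow sample falls in that cell, i.e. the
        point is unoccluded as far as the stored samples can tell.
        """
        node = self
        best = float("inf")
        while True:
            for (_, _, d) in node.samples:
                best = min(best, d)
            if node.children is None:
                return best
            node = node._child_for(x, y)
```

Because neighbouring quadrants share exact cell boundaries, shafts extruded from the cells towards the light source fit together without cracks, which is what makes the resulting crepuscular rays watertight; a camera-side ray tracing pass over these cells can then emit the deep image samples consumed by the compositing workflow.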