Multidimensional adaptive sampling and reconstruction for ray tracing

  • Authors:
  • Toshiya Hachisuka;Wojciech Jarosz;Richard Peter Weistroffer;Kevin Dale;Greg Humphreys;Matthias Zwicker;Henrik Wann Jensen

  • Affiliations:
  • UC San Diego;UC San Diego;University of Virginia;Harvard University;University of Virginia;UC San Diego;UC San Diego

  • Venue:
  • ACM SIGGRAPH 2008 papers
  • Year:
  • 2008

Abstract

We present a new adaptive sampling strategy for ray tracing. Our technique is specifically designed to handle multidimensional sample domains, and it is well suited for efficiently generating images with effects such as soft shadows, motion blur, and depth of field. These effects are problematic for existing image-based adaptive sampling techniques because they operate on pixels, whose values are potentially noisy results of a Monte Carlo ray tracing process. Our sampling technique operates on samples in the multidimensional space given by the rendering equation, and as a consequence the value of each sample is noise-free. Our algorithm consists of two passes. In the first pass, we adaptively generate samples in the multidimensional space, focusing on regions where the local contrast between samples is high. In the second pass, we reconstruct the image by integrating the multidimensional function along all but the image dimensions. We perform a high-quality anisotropic reconstruction by determining the extent of each sample in the multidimensional space using a structure tensor. We demonstrate our method on scenes with 3- to 5-dimensional sample spaces, including soft shadows, motion blur, and depth of field. The results show that our method uses fewer samples than Mitchell's adaptive sampling technique while producing images with less noise.
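The sketch below is a toy illustration of the two-pass structure the abstract describes, not the authors' implementation: pass one adaptively places samples in a multidimensional domain where local contrast between samples is high, and pass two reconstructs an image by integrating out the non-image dimensions. The integrand f(x, y, t), the contrast threshold, the sample budget, and the simple box-filter reconstruction (the paper instead uses anisotropic, structure-tensor-based reconstruction) are all hypothetical placeholders.

    import random

    def f(x, y, t):
        """Toy 3-D 'radiance': image plane (x, y) plus one extra dimension t
        (e.g. time for motion blur). A bright disc whose centre moves with t."""
        cx, cy = 0.3 + 0.4 * t, 0.5
        return 1.0 if (x - cx) ** 2 + (y - cy) ** 2 < 0.04 else 0.0

    def contrast(values):
        """Local contrast of a set of sample values: (max - min) / mean."""
        lo, hi = min(values), max(values)
        mean = sum(values) / len(values)
        return (hi - lo) / (mean + 1e-6)

    def sample_region(region, depth, samples, max_depth=5, threshold=0.5):
        """Pass 1: recursively refine regions of the (x, y, t) domain whose
        samples show high local contrast."""
        (x0, x1), (y0, y1), (t0, t1) = region
        pts = [(random.uniform(x0, x1), random.uniform(y0, y1),
                random.uniform(t0, t1)) for _ in range(8)]
        vals = [f(*p) for p in pts]
        samples.extend(zip(pts, vals))
        if depth < max_depth and contrast(vals) > threshold:
            xm, ym, tm = (x0 + x1) / 2, (y0 + y1) / 2, (t0 + t1) / 2
            for xr in ((x0, xm), (xm, x1)):
                for yr in ((y0, ym), (ym, y1)):
                    for tr in ((t0, tm), (tm, t1)):
                        sample_region((xr, yr, tr), depth + 1, samples,
                                      max_depth, threshold)

    def reconstruct(samples, width=32, height=32):
        """Pass 2: integrate over the non-image dimension t by averaging all
        samples falling into each pixel (an isotropic box filter; the paper
        performs anisotropic reconstruction guided by a structure tensor)."""
        sums = [[0.0] * width for _ in range(height)]
        counts = [[0] * width for _ in range(height)]
        for (x, y, _t), v in samples:
            px = min(int(x * width), width - 1)
            py = min(int(y * height), height - 1)
            sums[py][px] += v
            counts[py][px] += 1
        return [[sums[j][i] / counts[j][i] if counts[j][i] else 0.0
                 for i in range(width)] for j in range(height)]

    samples = []
    sample_region(((0.0, 1.0), (0.0, 1.0), (0.0, 1.0)), 0, samples)
    image = reconstruct(samples)
    print(f"{len(samples)} samples, pixel(16,16) = {image[16][16]:.3f}")

In this toy version, samples concentrate around the moving disc's boundary (where contrast is high), and motion blur emerges from averaging over t; the paper's contribution is doing this adaptively and anisotropically in the full multidimensional domain rather than per pixel.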