Approximate depth of field effects using few samples per pixel

  • Authors:
  • Kefei Lei; John F. Hughes

  • Affiliations:
  • Brown University; Brown University

  • Venue:
  • Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games
  • Year:
  • 2013

Abstract

We present a method for rendering depth-of-field (DoF) effects in a ray-tracing-based rendering pipeline using very few samples (typically two or three) per pixel, with the ability to refocus at arbitrary depths from a given viewpoint without gathering additional samples. To do so, we treat each sample as a proxy for possible nearby samples and compute its contributions to the final image with a splat-and-gather scheme. The radiance of each pixel in the output image is then obtained by compositing all contributing samples. We optimize the pipeline using mipmap-like techniques so that the running time is independent of the amount of focal blur in the image. Because our method approximates the underlying physical image-formation process, it avoids many of the artifacts of other approximation algorithms. With a very low sample budget it provides satisfactory DoF rendering for most purposes, and a quick preview of DoF effects for applications that demand high rendering quality.
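
To make the splat-and-gather idea concrete, the sketch below splats each ray-traced sample over a disc sized by a thin-lens circle of confusion and normalizes the accumulated contributions per pixel. This is a minimal illustration under assumed names and parameters (coc_radius, splat_gather_dof, aperture, focal_length, pixel_scale are ours, not the paper's) and omits the paper's depth-ordered compositing and mipmap-style acceleration.

```python
import numpy as np

def coc_radius(depth, focal_depth, aperture, focal_length):
    # Thin-lens circle of confusion on the image plane (same units as
    # focal_length); all parameter names here are illustrative.
    return abs(aperture * focal_length * (depth - focal_depth)
               / (depth * (focal_depth - focal_length)))

def splat_gather_dof(samples, width, height, focal_depth,
                     aperture=0.05, focal_length=0.035, pixel_scale=1000.0):
    # samples: iterable of (x, y, depth, rgb) tuples, a few per pixel,
    # as a ray tracer might produce them.  Each sample is splatted over a
    # disc sized by its circle of confusion; the output pixel is the
    # normalized sum of all contributions.
    accum = np.zeros((height, width, 3))
    weight = np.zeros((height, width))
    for x, y, depth, rgb in samples:
        r = max(coc_radius(depth, focal_depth, aperture, focal_length)
                * pixel_scale, 1.0)                 # radius in pixels
        x0, x1 = int(max(x - r, 0)), int(min(x + r, width - 1))
        y0, y1 = int(max(y - r, 0)), int(min(y + r, height - 1))
        w = 1.0 / (np.pi * r * r)                   # spread energy over disc
        for py in range(y0, y1 + 1):
            for px in range(x0, x1 + 1):
                if (px - x) ** 2 + (py - y) ** 2 <= r * r:
                    accum[py, px] += w * np.asarray(rgb, dtype=float)
                    weight[py, px] += w
    return accum / np.maximum(weight[..., None], 1e-8)
```

Because the samples themselves are reused, refocusing in this sketch amounts to calling splat_gather_dof again with a different focal_depth, mirroring the abstract's claim that no additional samples need to be gathered.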