Depth of field postprocessing for layered scenes using constant-time rectangle spreading

  • Authors:
  • Todd J. Kosloff, Michael W. Tao, Brian A. Barsky

  • Affiliations:
  • University of California, Berkeley, CA (all authors)

  • Venue:
  • Proceedings of Graphics Interface 2009
  • Year:
  • 2009

Abstract

Control over what is in focus and what is not in focus in an image is an important artistic tool. The range of depth in a 3D scene that is imaged in sufficient focus through an optics system, such as a camera lens, is called depth of field. Without depth of field, the entire scene appears completely in sharp focus, leading to an unnatural, overly crisp appearance. Current techniques for rendering depth of field in computer graphics are slow, suffer from artifacts, or restrict the choice of point spread function (PSF). In this paper, we present a new image filter based on rectangle spreading that runs in constant time per pixel. When used in a layered depth of field framework, our filter eliminates the intensity leakage and depth discontinuity artifacts that occur in previous methods. We also present several extensions to our rectangle spreading method that allow flexibility in the appearance of the blur through control over the PSF.
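
The core idea behind constant-time rectangle spreading is that a filled rectangle can be deposited with only four signed corner writes into an accumulation buffer, after which a single 2D running sum over the buffer turns all corner deltas into summed rectangles. The sketch below illustrates that general idea in Python for a grayscale image with a per-pixel square PSF half-width; the function and parameter names (spread_rectangles, radii) are illustrative assumptions, and the paper's layered compositing and PSF extensions are not reproduced here.

```python
import numpy as np

def spread_rectangles(image, radii):
    """Spread each input pixel as a uniform rectangular PSF whose half-width
    is given per pixel by `radii`, at constant cost per pixel.

    Each pixel writes four signed values at the corners of its spread
    rectangle into an accumulation buffer; a final 2D running sum turns
    those corner deltas into filled rectangles, summed over all pixels.
    """
    h, w = image.shape
    acc = np.zeros((h + 1, w + 1), dtype=np.float64)

    for y in range(h):
        for x in range(w):
            r = int(radii[y, x])
            x0, x1 = max(x - r, 0), min(x + r, w - 1)
            y0, y1 = max(y - r, 0), min(y + r, h - 1)
            area = (x1 - x0 + 1) * (y1 - y0 + 1)
            v = image[y, x] / area        # normalize so energy is preserved
            acc[y0, x0]         += v      # top-left corner
            acc[y0, x1 + 1]     -= v      # just right of the top-right corner
            acc[y1 + 1, x0]     -= v      # just below the bottom-left corner
            acc[y1 + 1, x1 + 1] += v      # cancels the two subtractions
    # Running sums along rows and columns integrate the corner deltas into
    # the blurred image; the cost is independent of the blur size.
    return np.cumsum(np.cumsum(acc, axis=0), axis=1)[:h, :w]

# Example usage with a hypothetical uniform 3-pixel half-width blur.
img = np.random.rand(64, 64)
radii = np.full((64, 64), 3)
blurred = spread_rectangles(img, radii)
print(np.isclose(blurred.sum(), img.sum()))  # True: total intensity preserved
```

Because each pixel performs a fixed number of writes regardless of its blur radius, the filter is constant time per pixel; within the layered framework the abstract describes, a spread of this kind would presumably be applied per depth layer before the layers are composited.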