In recent years, several cameras have been introduced that extend depth of field (DOF) by producing a depth-invariant point spread function (PSF). These cameras extend DOF by deblurring a captured image with a single spatially invariant PSF. For these cameras, the quality of recovered images depends both on the magnitude of the camera's PSF spectrum (its MTF) and on the similarity between PSFs at different depths. While researchers have compared the MTFs of different extended-DOF cameras, relatively little attention has been paid to evaluating their depth invariance. In this paper, we compare the depth invariance of several cameras and introduce a new camera that improves on existing designs in this regard while still maintaining a good MTF. Our technique utilizes a novel optical element placed in the pupil plane of an imaging system. Whereas previous approaches use optical elements characterized by their amplitude or phase profiles, our approach uses one whose behavior is characterized by its scattering properties. Such an element is commonly referred to as an optical diffuser, and we therefore refer to our new approach as diffusion coding. We show that diffusion coding can be analyzed in a simple and intuitive way by modeling the effect of the diffuser as a kernel in light field space. We provide a detailed analysis of diffusion-coded cameras and show results from an implementation using a custom-designed diffuser.
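The recovery step described above — deblurring with a single spatially invariant PSF — can be sketched with standard Wiener deconvolution in the frequency domain. This is only an illustrative sketch, not the paper's implementation: the Gaussian kernel, image, and SNR value below are placeholder assumptions standing in for a real depth-invariant PSF and captured image. Note how the regularization term plays the role the MTF discussion assigns it: frequencies where the PSF spectrum is small are attenuated rather than amplified.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Deblur an image with a single spatially invariant PSF via
    Wiener deconvolution. `snr` is an assumed signal-to-noise ratio."""
    # PSF is centered; ifftshift moves its peak to the origin before the FFT.
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    # Wiener filter conj(H) / (|H|^2 + 1/SNR): the 1/SNR term keeps
    # frequencies where the MTF |H| is small from being blown up.
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Toy example: blur a synthetic image with a small Gaussian stand-in PSF,
# then recover it. A depth-invariant camera would apply the same PSF
# (and hence the same deblurring filter) at every scene depth.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
x = np.arange(64) - 32
psf = np.exp(-0.5 * (x[:, None] ** 2 + x[None, :] ** 2) / 2.0 ** 2)
psf /= psf.sum()
# Circular convolution via the FFT, matching the deconvolution model.
blurred = np.real(np.fft.ifft2(
    np.fft.fft2(image) * np.fft.fft2(np.fft.ifftshift(psf))))
recovered = wiener_deconvolve(blurred, psf, snr=1e6)
```

With a high assumed SNR the filter approaches a direct inverse at frequencies where the MTF is large, so the recovered image is strictly closer to the original than the blurred input; lowering `snr` trades residual blur for noise robustness.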