We present confocal stereo, a new method for computing 3D shape by controlling the focus and aperture of a lens. The method is specifically designed for reconstructing scenes with high geometric complexity or fine-scale texture. To achieve this, we introduce the confocal constancy property, which states that as the lens aperture varies, the pixel intensity of a visible in-focus scene point will vary in a scene-independent way that can be predicted by prior radiometric lens calibration. The only requirement is that incoming radiance within the cone subtended by the largest aperture is nearly constant. First, we develop a detailed lens model that factors out the distortions in high-resolution SLR cameras (12 MP or more) with large-aperture lenses (e.g., f/1.2). This allows us to assemble an A × F aperture-focus image (AFI) for each pixel, which collects the undistorted measurements over all A apertures and F focus settings. In the AFI representation, confocal constancy reduces to color comparisons within regions of the AFI and leads to focus metrics that can be evaluated separately for each pixel. We propose two such metrics and present initial reconstruction results for complex scenes.
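To illustrate the core idea, the following is a minimal sketch (not the paper's exact metric) of how a per-pixel AFI can be scored under confocal constancy. It assumes radiometric calibration has already been applied, so that for the true in-focus setting the pixel's color is nearly constant across apertures; the metric simply picks the focus column of the AFI with the lowest variance across apertures. The function name and array layout are illustrative assumptions.

```python
import numpy as np

def afi_focus_metric(afi):
    """Score one pixel's aperture-focus image under confocal constancy.

    afi : (A, F, 3) array of radiometrically corrected colors, one row
          per aperture setting and one column per focus setting.

    Confocal constancy predicts that at the in-focus setting the color
    varies minimally across apertures, so we return the index of the
    focus column with the smallest across-aperture variance.
    """
    # Variance across the A apertures, summed over the 3 color channels,
    # giving one constancy score per focus setting (shape (F,)).
    variance_per_focus = afi.var(axis=0).sum(axis=-1)
    return int(np.argmin(variance_per_focus))
```

In practice the paper evaluates such metrics independently at every pixel of a high-resolution image, which is what makes the approach suitable for fine-scale structures like hair.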