Projectors are increasingly being used as light sources in computer vision applications. In several applications, they are modeled as point light sources, thus ignoring the effects of illumination defocus. In addition, most active vision techniques assume that a scene point is illuminated only directly by the light source, thus ignoring global light transport effects. Since defocus and global illumination co-occur in virtually all scenes illuminated by projectors, ignoring them can introduce strong, systematic biases in the recovered scene properties. To make computer vision techniques work for general real-world scenes, it is therefore important to account for both effects.

In this paper, we study the interplay between defocused illumination and global light transport. We show that both of these seemingly disparate effects can be expressed as low-pass filters on the incident illumination. Using this observation, we derive an invariant between the two effects, which can be used to separate them. This is directly useful in scenarios where limited depth-of-field devices (such as projectors) illuminate scenes with global light transport and significant depth variations. We show applications in two scenarios: (a) accurate depth recovery in the presence of global light transport, and (b) factoring out the effects of illumination defocus for correct direct-global component separation. We demonstrate our approach on scenes with complex shapes, reflectance properties, textures, and translucencies.
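The direct-global separation referred to above rests on high-frequency illumination: under a fine stripe pattern, the direct component follows the pattern, while global light transport, acting as a low-pass filter, responds only to the pattern's spatial average. The sketch below is a minimal 1-D toy model of that idea, not the paper's actual formulation — the per-pixel reflectance, constant global term, stripe period, and box-blur defocus kernel are all illustrative assumptions. It also shows the bias that motivates the paper: blurring (defocusing) the pattern reduces its contrast, so the naive max/min separation underestimates the direct component.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
direct = rng.uniform(0.2, 1.0, n)   # hypothetical per-pixel direct reflectance
g = 0.3                              # hypothetical constant global component

period = 8                           # stripe period in pixels (50% duty cycle)

def capture(blur_kernel=None):
    """Simulate captures under shifted high-frequency stripe patterns.

    The direct term follows the (possibly defocus-blurred) pattern; the
    global term, modeled as a low-pass filter, sees only the pattern mean.
    """
    obs = []
    for s in range(period):
        pattern = (((np.arange(n) + s) // (period // 2)) % 2).astype(float)
        if blur_kernel is not None:
            # Illumination defocus: the projected pattern itself is low-passed.
            pattern = np.convolve(pattern, blur_kernel, mode="same")
        obs.append(direct * pattern + g * pattern.mean())
    return np.array(obs)

def separate(obs):
    """Naive direct/global separation from per-pixel max/min over shifts."""
    L_max, L_min = obs.max(axis=0), obs.min(axis=0)
    return L_max - L_min, 2.0 * L_min   # (direct estimate, global estimate)

# In-focus patterns: separation is exact in this toy model.
d_sharp, g_sharp = separate(capture())

# Defocused patterns (box blur wider than one stripe): stripe contrast drops,
# so the direct component is underestimated -- the systematic bias that the
# paper's defocus/global-transport invariant is designed to factor out.
d_blur, _ = separate(capture(blur_kernel=np.ones(5) / 5.0))
```

With sharp stripes, `d_sharp` recovers `direct` exactly and `g_sharp` recovers `g`; with the blurred stripes, the interior pixels of `d_blur` fall strictly below `d_sharp`, illustrating why defocus cannot simply be ignored when separating components.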