In this paper we show how to obtain full-resolution depth maps from a single image captured with a plenoptic camera. Previous work showed that estimating a low-resolution depth map with a plenoptic camera differs substantially from doing so with a camera array and, in particular, requires appropriate depth-varying antialiasing filtering. Here we show a striking result: one can instead recover a depth map at the same full resolution as the input data. We propose a novel algorithm that exploits a photoconsistency constraint specific to light fields captured with plenoptic cameras. Key to our approach are the handling of missing data in the photoconsistency constraint and the introduction of novel boundary conditions that impose texture consistency in the reconstructed full-resolution images. These ideas are combined with an efficient regularization scheme to yield depth maps at a higher resolution than any previous method. We provide results on both synthetic and real data.
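The abstract does not spell out the algorithm, so the sketch below is only a rough, hypothetical illustration of the general idea of a photoconsistency cost with missing data: a plane sweep over candidate disparities in which the cost at each pixel is the variance of only the *available* samples across views, and pixels with too few valid views are excluded. The function name, the sub-aperture-view input layout, and the winner-take-all depth selection are all assumptions, not the authors' method; in particular, the paper's texture-consistency boundary conditions and regularization scheme are omitted here.

```python
import numpy as np

def photoconsistency_depth(views, masks, disparities):
    """Winner-take-all disparity from a masked photoconsistency cost.

    views:       (N, H, W) sub-aperture images (hypothetical layout).
    masks:       (N, H, W) booleans; False marks missing samples
                 (e.g. vignetted or occluded microlens pixels).
    disparities: 1-D array of candidate per-view pixel shifts.
    """
    n, h, w = views.shape
    # Horizontal baseline of each view relative to the central one.
    offsets = np.arange(n) - n // 2
    costs = np.full((len(disparities), h, w), np.inf)
    for di, d in enumerate(disparities):
        samples = np.full((n, h, w), np.nan)
        for i in range(n):
            # Warp view i back toward the reference view under this
            # disparity hypothesis (circular shift keeps the demo simple).
            s = -int(round(offsets[i] * d))
            warped = np.roll(views[i], s, axis=1)
            valid = np.roll(masks[i], s, axis=1)
            samples[i][valid] = warped[valid]
        # Photoconsistency: variance over the available samples only,
        # so missing data is excluded rather than treated as zero.
        count = np.sum(~np.isnan(samples), axis=0)
        var = np.nanvar(np.where(count[None] == 0, 0.0, samples), axis=0)
        costs[di] = np.where(count >= 2, var, np.inf)
    return disparities[np.argmin(costs, axis=0)]
```

Masking the variance, rather than filling missing samples with a default value, is what keeps occluded or vignetted pixels from biasing the cost; this is a simplified stand-in for the paper's treatment of missing data, and a real implementation would add sub-pixel warping and regularization on top.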