The pure light field approach to view synthesis is known to require a large number of image samples to produce anti-aliased renderings; when image sampling is insufficient, it must be compensated for by geometry sampling. Currently, geometry estimation is performed either offline or with dedicated hardware. Our solution to this dilemma rests on three key ideas: a formal analysis of the equivalence between light field rendering and plane-based warping, multi-focus imaging in a multi-camera system by plane sweeping, and the fusion of the multi-focus images using multi-view stereo. The essence of our method is to perform only as much depth estimation as the minimal joint image-geometry sampling rate requires, using off-the-shelf graphics hardware. As a result, real-time anti-aliased light field rendering is achieved even when the image samples alone are insufficient.
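The plane-sweep idea above can be illustrated with a minimal sketch: for each candidate depth plane, warp the neighboring views toward the reference view and score photo-consistency (variance across views); the plane with the lowest variance wins per pixel. This is not the paper's GPU implementation — the function name and the use of integer column shifts in place of the full per-plane homography warp are simplifying assumptions for illustration.

```python
import numpy as np

def plane_sweep_depth(ref, neighbors, shifts_per_plane):
    """Toy plane-sweep stereo (illustrative, not the paper's GPU version).

    ref              : (H, W) reference image
    neighbors        : list of (H, W) neighboring views
    shifts_per_plane : one entry per candidate plane; each entry is a list
                       of per-view integer column shifts standing in for
                       the plane-induced homography warp
    Returns the per-pixel index of the best (most photo-consistent) plane.
    """
    h, w = ref.shape
    best_cost = np.full((h, w), np.inf)
    best_plane = np.zeros((h, w), dtype=int)
    for p, shifts in enumerate(shifts_per_plane):
        # Warp every neighbor onto the reference for this candidate plane.
        stack = [ref] + [np.roll(img, s, axis=1)
                         for img, s in zip(neighbors, shifts)]
        # Photo-consistency: variance across the aligned views.
        cost = np.var(np.stack(stack), axis=0)
        mask = cost < best_cost
        best_cost[mask] = cost[mask]
        best_plane[mask] = p
    return best_plane

# Synthetic check: one neighbor displaced by 2 pixels; the plane whose
# shift undoes that displacement should win everywhere.
rng = np.random.default_rng(0)
ref = rng.random((8, 32))
neighbor = np.roll(ref, 2, axis=1)          # view shifted by disparity 2
planes = [[0], [-2], [-4]]                   # candidate per-view shifts
depth = plane_sweep_depth(ref, [neighbor], planes)
```

In the full method each candidate plane induces a homography per camera (computed from the calibrated poses), and the warp-and-score loop maps directly onto texture projection and blending on graphics hardware, which is what makes the sweep real-time.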