The goal of most image-based rendering systems can be stated as follows: given a set of pictures taken from various vantage points, synthesize the image that would be seen from a novel viewpoint. In this paper we present a novel approach to view synthesis that hinges on the observation that human viewers are quite sensitive to the motion of image features corresponding to intensity discontinuities, or edges. Our system therefore focuses its effort on recovering the 3D positions of these features so that their motions can be synthesized correctly. In the current implementation, the feature points are recovered from image sequences using the epipolar plane image (EPI) analysis techniques proposed by Bolles, Baker, and Marimont. The output of this procedure resembles that of an edge extractor, except that each edgel is augmented with an accurate depth estimate. The method has the advantage of producing accurate depth estimates for most of the salient features in the scene, including those on occluding contours. We demonstrate that compelling novel views can be produced from this information alone.

The paper also describes a principled approach to reasoning about the 3D structure of the scene from the quasi-sparse features returned by the EPI analysis. This reasoning allows us to correctly reproduce occlusion and disocclusion effects in the synthetic views without requiring dense correspondences. The same technique could also be used to analyze and refine the 3D results returned by range finders, stereo systems, or structure-from-motion algorithms. Results obtained by applying the proposed techniques to real image data sets are presented.
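The two core operations described above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes an idealized pinhole camera with focal length `f` (in pixels) translating laterally at a constant `baseline` per frame, so that a scene point at depth Z traces a straight line of slope -f*baseline/Z in the (frame, u) plane of the EPI. The function and variable names are hypothetical.

```python
import numpy as np

def depth_from_epi_track(us, f, baseline):
    """Recover depth from a feature's track in an epipolar plane image.

    For a camera translating laterally by `baseline` per frame, a point
    at depth Z traces a line in the (frame, u) plane with slope
    du/dt = -f * baseline / Z.  Fitting that line gives Z directly.
    """
    t = np.arange(len(us))
    slope, u0 = np.polyfit(t, us, 1)   # least-squares line fit to the track
    Z = -f * baseline / slope
    return Z, u0

def synthesize_feature_positions(points_3d, R, t, f):
    """Project recovered 3D edge features into a novel pinhole view.

    Returns image coordinates and camera-space depths; the depths can be
    used to z-order the edgels and reproduce occlusion effects.
    """
    cam = (R @ points_3d.T).T + t       # world -> novel camera frame
    uv = f * cam[:, :2] / cam[:, 2:3]   # perspective projection
    return uv, cam[:, 2]
```

Sorting the projected edgels by the returned depth (back to front) is one simple way to decide which features occlude which in the synthetic view.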