Algebraic and geometric reasoning using Dixon resultants. In ISSAC '94: Proceedings of the International Symposium on Symbolic and Algebraic Computation.
Object Pose: The Link between Weak Perspective, Paraperspective, and Full Perspective. International Journal of Computer Vision.
Solving the recognition problem for six lines using the Dixon resultant. Mathematics and Computers in Simulation, special issue on high performance symbolic computing.
Fast Approximate Energy Minimization via Graph Cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Image-Based Rendering Using Parameterized Image Varieties. International Journal of Computer Vision.
Multiple View Geometry in Computer Vision.
Geometric Context from a Single Image. In ICCV '05: Proceedings of the Tenth IEEE International Conference on Computer Vision, Volume 1.
Free-viewpoint depth image based rendering. Journal of Visual Communication and Image Representation.
This paper presents a novel parameterized-variety-based view synthesis scheme for 3DTV and multi-view systems. We generalize the parameterized image variety approach to image-based rendering proposed in [1] to handle full perspective cameras. An algebraic geometry framework is proposed for parameterizing the variety associated with full perspective images by the image positions of three reference scene points. A complete parameterization of the 3D scene is constructed, which allows realistic novel views to be generated from arbitrary viewpoints without explicit 3D reconstruction, taking only a few multi-view images from uncalibrated cameras as input. Another contribution of this paper is a generalized and flexible architecture for multi-view 3DTV based on this variety model. The novelty of the architecture lies in merging the variety-based approach with the standard depth-image-based view synthesis pipeline, without explicitly recovering sparse or dense 3D points; this integrated framework thereby overcomes the problems associated with existing depth-based representations. The key aspects of the joint framework are: 1) synthesis of artifact-free novel views from arbitrary camera positions for wide-angle viewing; 2) generation of a signal representation compatible with standard multi-view systems; 3) extraction of reliable view-dependent depth maps from arbitrary virtual viewpoints without recovering exact 3D points; and 4) an intuitive interface for virtual view specification based on scene content. Experimental results on standard multi-view sequences demonstrate the effectiveness of the proposed scheme.
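The abstract refers to the standard depth-image-based rendering (DIBR) pipeline that the proposed architecture merges with the variety model. As background, the core DIBR step can be sketched as a forward warp: back-project each source pixel through its depth, transform into the virtual camera frame, and reproject. This is a minimal illustrative sketch of the generic pinhole warp, not the paper's own method; the function name `dibr_warp`, the argument layout, and the source-to-virtual pose convention `Xv = R @ X + t` are all assumptions for illustration.

```python
import numpy as np

def dibr_warp(pixels, depths, K_src, K_dst, R, t):
    """Forward-warp source pixels into a virtual view (illustrative sketch).

    pixels: (N, 2) array of (u, v) source pixel coordinates
    depths: (N,) depth of each pixel along the source camera's z-axis
    K_src, K_dst: 3x3 intrinsic matrices of source and virtual cameras
    R, t: assumed source-to-virtual rotation (3x3) and translation (3,)
    Returns an (N, 2) array of pixel coordinates in the virtual view.
    """
    # Back-project: X = depth * K_src^{-1} [u, v, 1]^T
    ones = np.ones((pixels.shape[0], 1))
    rays = np.linalg.inv(K_src) @ np.hstack([pixels, ones]).T   # 3 x N rays
    X = rays * depths                                           # scale by depth
    # Transform into the virtual camera frame, then project with K_dst
    Xv = R @ X + t[:, None]
    p = K_dst @ Xv
    return (p[:2] / p[2]).T                                     # dehomogenize

# Sanity check: identical cameras reproduce the source pixels exactly.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
px = np.array([[320.0, 240.0], [100.0, 50.0]])
out = dibr_warp(px, np.array([2.0, 3.0]), K, K, np.eye(3), np.zeros(3))
```

In a full pipeline the warped coordinates drive splatting or z-buffered resampling, and disocclusion holes are filled from other views; the variety-based scheme above is designed to supply the view-dependent depth that this warp consumes without reconstructing explicit 3D points.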