In 3D TV research, one approach is to employ multiple cameras to create a 3D multi-view signal, with the aim of enabling interactive free-viewpoint selection in 3D TV media. This paper explores a new rendering algorithm that computes a free viewpoint between two reference views from existing cameras. A unique property is that we perform forward warping for both texture and depth simultaneously. Our rendering offers several advantages. First, resampling artifacts are filled in by inverse warping. Second, disocclusions are processed while omitting warping of edges at large depth discontinuities. Third, our disocclusion inpainting approach explicitly uses depth information. We obtain average PSNR gains of 3 dB and 4.5 dB for the 'Breakdancers' and 'Ballet' sequences, respectively, compared to recently published results. Moreover, experiments are performed using compressed video from the surrounding cameras. The overall system quality is dominated by the rendering quality and not by coding.
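The forward-warping step described above can be illustrated with a minimal sketch. This is not the paper's actual implementation: it assumes rectified cameras so that the warp reduces to a pure horizontal disparity (disparity = focal length × baseline / depth), and it warps texture and depth together with a z-buffer so that nearer surfaces win. Unmapped target pixels remain holes (disocclusions) to be filled later, e.g. by inverse warping or depth-guided inpainting. All function and variable names here are illustrative.

```python
import numpy as np

def forward_warp(texture, depth, baseline, focal):
    """Forward-warp texture and depth jointly to a virtual view.

    Simplified rectified-stereo model: each source pixel shifts
    horizontally by disparity = focal * baseline / depth. When several
    source pixels land on the same target pixel, the one with the
    smallest depth (closest surface) wins (z-buffering). Target pixels
    that receive no source pixel are disocclusions, flagged in hole_mask.
    """
    h, w = depth.shape
    warped_tex = np.zeros_like(texture)
    warped_depth = np.full((h, w), np.inf)   # z-buffer, initially "infinitely far"
    hole_mask = np.ones((h, w), dtype=bool)  # True until a pixel is written

    disparity = focal * baseline / depth     # per-pixel horizontal shift
    for y in range(h):
        for x in range(w):
            xt = int(round(x - disparity[y, x]))  # target column in virtual view
            if 0 <= xt < w and depth[y, x] < warped_depth[y, xt]:
                warped_depth[y, xt] = depth[y, x]  # nearer surface overwrites farther
                warped_tex[y, xt] = texture[y, x]
                hole_mask[y, xt] = False
    return warped_tex, warped_depth, hole_mask
```

Because depth is warped alongside texture, the resulting hole mask and warped depth map are exactly what a depth-aware inpainting stage needs: it can distinguish disoccluded background regions from foreground and fill them accordingly.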