We propose an approach to interactively explore video textures from different viewpoints. Scenes can be played back continuously and in a temporally coherent fashion from any camera location along a path. Our algorithm takes as input short videos from a set of discrete camera locations and does not require contemporaneous capture: the data is acquired by moving a single camera. We analyze this data to find optimal transitions within each video (equivalent to video textures) and to find good transition points between spatially distinct videos. We propose a spatio-temporal view synthesis approach that dynamically creates intermediate frames to maintain temporal coherence. We demonstrate our approach on a variety of scenes with stochastic or repetitive motions, and we analyze the limits of our approach and failure-case artifacts.
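The within-video transition analysis mentioned in the abstract follows the classic video-texture idea: a cut from frame i to frame j looks seamless when frame i+1 closely resembles frame j. A minimal sketch of that scoring step (not the paper's actual implementation; the function name and parameters are illustrative, and real systems add temporal filtering and weighting over frame windows) might look like this:

```python
import numpy as np

def find_transitions(frames, top_k=5):
    """Rank candidate loop transitions in a clip, video-texture style.

    frames: array of shape (N, H, W) or (N, H, W, C).
    A jump from frame i to frame j is seamless when frame i+1
    resembles frame j, so we score D[i, j] = ||frames[i+1] - frames[j]||.
    """
    n = len(frames)
    flat = frames.reshape(n, -1).astype(np.float64)
    # D[i, j]: cost of cutting from frame i straight to frame j
    d = np.linalg.norm(flat[1:, None, :] - flat[None, :, :], axis=2)
    for i in range(n - 1):
        d[i, i + 1] = np.inf  # j == i+1 is ordinary playback, not a jump
    # Return the top_k cheapest jumps as (cost, i, j) triples
    order = np.argsort(d, axis=None)[:top_k]
    return [(d.flat[k], *np.unravel_index(k, d.shape)) for k in order]
```

On a clip with truly repetitive motion, the cheapest jumps land one period apart, which is what lets playback continue indefinitely without a visible seam.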