In this paper we propose a method to construct a virtual video sequence for a camera moving through a static environment, given an input sequence captured along a different camera trajectory. Existing image-based rendering techniques can generate photorealistic images from a set of input views, but the output images almost unavoidably contain small regions where the colour has been incorrectly chosen. In a single image these artifacts are often hard to spot, but they become more obvious when a real image is viewed alongside its virtual stereo pair, and more obvious still when a sequence of novel views is generated, since the artifacts are rarely temporally consistent. To address this problem of consistency, we propose a new spatiotemporal approach to novel video synthesis. The pixels of the output video sequence are modelled as nodes of a 3-D graph, and we define an MRF on this graph which encodes the photoconsistency of pixels as well as texture priors in both space and time. Unlike methods based on scene geometry, which yield highly connected graphs, our approach results in a graph whose degree is independent of scene structure. The MRF energy is therefore tractable, and we minimise it over the whole sequence using a state-of-the-art message-passing optimisation algorithm. We demonstrate the effectiveness of our approach in reducing temporal artifacts.