Temporal priors for novel video synthesis

  • Authors:
  • Ali Shahrokni;Oliver Woodford;Ian Reid

  • Affiliations:
  • Robotics Research Laboratory, University of Oxford, Oxford, UK;Robotics Research Laboratory, University of Oxford, Oxford, UK;Robotics Research Laboratory, University of Oxford, Oxford, UK

  • Venue:
  • ACCV'07 Proceedings of the 8th Asian Conference on Computer Vision - Volume Part II
  • Year:
  • 2007

Abstract

In this paper we propose a method to construct a virtual sequence for a camera moving through a static environment, given an input sequence from a different camera trajectory. Existing image-based rendering techniques can generate photorealistic images given a set of input views, though the output images almost unavoidably contain small regions where the colour has been incorrectly chosen. In a single image these artifacts are often hard to spot, but they become more obvious when viewing a real image with its virtual stereo pair, and even more so when a sequence of novel views is generated, since the artifacts are rarely temporally consistent. To address this problem of consistency, we propose a new spatiotemporal approach to novel video synthesis. The pixels in the output video sequence are modelled as nodes of a 3-D graph. We define an MRF on this graph which encodes photoconsistency of pixels as well as texture priors in both space and time. Unlike methods based on scene geometry, which yield highly connected graphs, our approach results in a graph whose degree is independent of scene structure. The MRF energy is therefore tractable, and we solve it for the whole sequence using a state-of-the-art message passing optimisation algorithm. We demonstrate the effectiveness of our approach in reducing temporal artifacts.
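
The abstract does not specify the exact neighbourhood system, so the following is only a minimal sketch of the kind of spatiotemporal pixel graph it describes, assuming a 4-connected spatial neighbourhood within each frame and a single temporal link to the same pixel position in the adjacent frames; all function and variable names are hypothetical. It illustrates why the node degree (at most 6 under these assumptions) is fixed regardless of scene structure, which is what keeps the resulting MRF energy tractable for message passing: a photoconsistency term would attach to each node and texture-prior terms to each spatial and temporal edge.

    from itertools import product

    def build_spatiotemporal_graph(width, height, n_frames):
        """Adjacency list mapping each pixel node (x, y, t) to its neighbours."""
        adjacency = {}
        for x, y, t in product(range(width), range(height), range(n_frames)):
            neighbours = []
            # Spatial 4-neighbourhood within frame t.
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    neighbours.append((nx, ny, t))
            # Temporal links to the same pixel position in adjacent frames.
            for dt in (-1, 1):
                nt = t + dt
                if 0 <= nt < n_frames:
                    neighbours.append((x, y, nt))
            adjacency[(x, y, t)] = neighbours
        return adjacency

    if __name__ == "__main__":
        graph = build_spatiotemporal_graph(4, 3, 2)
        # Maximum degree is 6 (4 spatial + 2 temporal), independent of the scene.
        print(max(len(nbrs) for nbrs in graph.values()))

By contrast, a graph whose connectivity follows scene geometry (e.g. linking every pixel to all pixels that may observe the same surface point) can have arbitrarily high degree, which is the tractability problem the abstract points to.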