Using photographs to enhance videos of a static scene

  • Authors:
  • Pravin Bhat (University of Washington); C. Lawrence Zitnick (Microsoft Research); Noah Snavely (University of Washington); Aseem Agarwala (Adobe Systems); Maneesh Agrawala (University of California, Berkeley); Michael Cohen (University of Washington and Microsoft Research); Brian Curless (University of Washington); Sing Bing Kang (Microsoft Research)

  • Venue:
  • EGSR'07: Proceedings of the 18th Eurographics Conference on Rendering Techniques
  • Year:
  • 2007

Abstract

We present a framework for automatically enhancing videos of a static scene using a few photographs of the same scene. For example, our system can transfer photographic qualities such as high resolution, high dynamic range, and better lighting from the photographs to the video. Additionally, the user can quickly modify the video by editing only a few still images of the scene. Finally, our system allows a user to remove unwanted objects and camera shake from the video. These capabilities are enabled by two technical contributions presented in this paper. First, we make several improvements to a state-of-the-art multiview stereo algorithm in order to compute view-dependent depths using video, photographs, and structure-from-motion data. Second, we present a novel image-based rendering algorithm that can re-render the input video using the appearance of the photographs while preserving certain temporal dynamics such as specularities and dynamic scene lighting.
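To make the two stages described in the abstract concrete, below is a minimal sketch, under simplifying assumptions, of (1) backward-warping a photograph into a video frame given that frame's view-dependent depth map and structure-from-motion camera parameters, and (2) a crude restoration of per-pixel temporal variation. The function names, camera conventions, and the temporal-averaging heuristic are illustrative assumptions, not the authors' algorithms, which additionally handle occlusion, multiple photographs, seam blending, and more principled treatment of temporal dynamics.

```python
# Illustrative sketch only; names and conventions are hypothetical, not the paper's implementation.
import numpy as np

def project_photo_into_frame(photo, depth, K_v, Rt_v, K_p, Rt_p):
    """Backward-warp a photograph into a video frame.

    photo : (Hp, Wp, 3) photograph, float in [0, 1]
    depth : (Hv, Wv) view-dependent depth for the video frame
    K_*   : 3x3 intrinsics; Rt_* : 3x4 world-to-camera [R | t] extrinsics
    """
    Hv, Wv = depth.shape
    ys, xs = np.mgrid[0:Hv, 0:Wv]
    # Unproject each video pixel to a 3D point using its depth.
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T   # 3 x N
    rays = np.linalg.inv(K_v) @ pix                                        # camera-space rays
    pts_cam = rays * depth.reshape(1, -1)                                  # 3 x N points
    R_v, t_v = Rt_v[:, :3], Rt_v[:, 3:]
    pts_world = R_v.T @ (pts_cam - t_v)                                    # camera -> world
    # Project the 3D points into the photograph's view.
    R_p, t_p = Rt_p[:, :3], Rt_p[:, 3:]
    proj = K_p @ (R_p @ pts_world + t_p)
    u = (proj[0] / proj[2]).reshape(Hv, Wv)
    v = (proj[1] / proj[2]).reshape(Hv, Wv)
    # Nearest-neighbour sampling; a real system would interpolate and test for occlusion.
    Hp, Wp = photo.shape[:2]
    ui = np.clip(np.round(u).astype(int), 0, Wp - 1)
    vi = np.clip(np.round(v).astype(int), 0, Hp - 1)
    return photo[vi, ui]

def restore_temporal_dynamics(rendered, video):
    """Transfer temporal variation (e.g., specular flicker, lighting changes) by adding
    each video frame's deviation from a temporal average back onto the rendered frames."""
    base = np.stack(video).mean(axis=0)   # crude temporal low-pass over the clip
    return [np.clip(r + (f - base), 0.0, 1.0) for r, f in zip(rendered, video)]
```

In this sketch, per-frame depth and cameras are assumed to come from the multiview stereo and structure-from-motion stages; the warped photograph supplies the enhanced appearance, and the residual between each video frame and the clip's temporal average stands in for the preserved temporal dynamics.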