Image-based spatio-temporal modeling and view interpolation of dynamic events

  • Authors:
  • Sundar Vedula; Simon Baker; Takeo Kanade

  • Affiliations:
  • YesVideo, Inc., Santa Clara, CA; Carnegie Mellon University, Pittsburgh, PA; Carnegie Mellon University, Pittsburgh, PA

  • Venue:
  • ACM Transactions on Graphics (TOG)
  • Year:
  • 2005

Abstract

We present an approach for modeling and rendering a dynamic, real-world event from an arbitrary viewpoint, and at any time, using images captured from multiple video cameras. The event is modeled as a nonrigidly varying dynamic scene, captured by many images from different viewpoints at discrete times. First, the spatio-temporal geometric properties of the scene (shape and instantaneous motion) are computed. The view synthesis problem is then solved using a reverse mapping algorithm that ray-casts across space and time to compute a novel image from any viewpoint in the 4D space of position and time. Results are shown on real-world events captured in the CMU 3D Room, by creating synthetic renderings of the event from novel, arbitrary positions in space and time. Multiple such renderings can be put together to create retimed fly-by movies of the event, with a resulting visual experience richer than that of a regular video clip or of switching between images from multiple cameras.
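
To make the reverse-mapping idea concrete, below is a minimal, hypothetical sketch in Python/NumPy, not the paper's implementation: for each pixel of a virtual camera, a ray is cast into scene geometry that has been advected to the requested time along its estimated instantaneous 3D motion. A single sphere with one flow vector stands in for the reconstructed shape and scene-flow field, and a facing-ratio shade stands in for sampling colors from the input camera images; all names (cast_ray, render_novel_view, alpha, etc.) are illustrative assumptions.

```python
import numpy as np

def cast_ray(origin, direction, center, radius):
    """Intersect a unit-direction ray with a sphere; return the nearest
    hit point or None. The sphere stands in for the reconstructed shape."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    if t < 0.0:
        return None
    return origin + t * direction

def render_novel_view(alpha, center0, flow, radius, cam_pos, res=64):
    """Reverse mapping across space and time: for every pixel of the
    virtual view, cast a ray into geometry advected to time alpha along
    its instantaneous 3D motion, then shade the intersection point."""
    center_t = center0 + alpha * flow  # time-interpolated geometry
    img = np.zeros((res, res))
    for v in range(res):
        for u in range(res):
            # Simple pinhole model: pixel -> unit ray direction.
            d = np.array([(u - res / 2) / res, (v - res / 2) / res, 1.0])
            d /= np.linalg.norm(d)
            hit = cast_ray(cam_pos, d, center_t, radius)
            if hit is not None:
                # Facing-ratio shading as a stand-in; the paper instead
                # samples colors from the nearest input camera images.
                n = (hit - center_t) / radius
                img[v, u] = max(0.0, -np.dot(n, d))
    return img

# Render the hypothetical event halfway between two captured instants.
frame = render_novel_view(alpha=0.5,
                          center0=np.array([0.0, 0.0, 4.0]),
                          flow=np.array([0.5, 0.0, 0.0]),
                          radius=1.0,
                          cam_pos=np.zeros(3))
```

In this sketch, varying alpha between 0 and 1 retimes the event while moving cam_pos changes the viewpoint; together they sweep the 4D space of position and time that the abstract describes, which is how retimed fly-by movies can be assembled from multiple such renderings.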