Refilming with Depth-Inferred Videos

  • Authors:
  • Guofeng Zhang; Zilong Dong; Jiaya Jia; Liang Wan; Tien-Tsin Wong; Hujun Bao

  • Affiliations:
  • Zhejiang University, Hangzhou (G. Zhang, Z. Dong, H. Bao); The Chinese University of Hong Kong, Hong Kong (J. Jia, L. Wan, T.-T. Wong)

  • Venue:
  • IEEE Transactions on Visualization and Computer Graphics
  • Year:
  • 2009

Abstract

Compared to still image editing, content-based video editing faces the additional challenge of maintaining spatiotemporal consistency with respect to scene geometry. This makes it difficult to seamlessly modify video content, for instance by inserting or removing an object. In this paper, we present a new video editing system for creating spatiotemporally consistent and visually appealing refilming effects. Unlike typical filming practice, our system requires no labor-intensive construction of 3D models or surfaces mimicking the real scene. Instead, it is based on an unsupervised inference of view-dependent depth maps for all video frames. We provide interactive tools that require only a small amount of user input to perform elementary video content editing, such as separating video layers, completing the background scene, and extracting moving objects. These tools can be combined in our system to produce a variety of visual effects, including but not limited to video composition, the "predator" effect, bullet-time, depth-of-field, and fog synthesis. Some of the effects can be achieved in real time.
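
Fog synthesis, for instance, follows directly from having a per-pixel depth value: each pixel is blended toward a fog color by an amount that grows with its depth. The sketch below illustrates this with the standard exponential attenuation model; it is an illustrative assumption rather than the paper's actual implementation, and the function name and `density` parameter are hypothetical.

```python
import numpy as np

def synthesize_fog(frame, depth, fog_color=(0.8, 0.8, 0.85), density=0.5):
    """Blend a uniform fog color into a frame using per-pixel depth.

    frame     : (H, W, 3) float array, RGB values in [0, 1]
    depth     : (H, W) float array, scene depth per pixel
    fog_color : RGB color of the fog (hypothetical default)
    density   : fog extinction coefficient (hypothetical parameter)
    """
    # Exponential attenuation: transmittance decays with depth,
    # so distant pixels receive more fog than nearby ones.
    transmittance = np.exp(-density * depth)[..., None]  # (H, W, 1)
    fog = np.asarray(fog_color, dtype=frame.dtype)       # (3,)
    return frame * transmittance + fog * (1.0 - transmittance)

# Usage sketch: fogged = synthesize_fog(frame, depth_map, density=0.3)
```

Under this model, nearby pixels keep most of their original color while distant pixels fade toward the fog color, a depth-dependent behavior that a single 2D image without inferred depth cannot provide.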