Video matting of complex scenes

  • Authors:
  • Yung-Yu Chuang, Aseem Agarwala, Brian Curless, David H. Salesin, Richard Szeliski

  • Affiliations:
  • University of Washington (Chuang, Agarwala, Curless); University of Washington and Microsoft Research (Salesin); Microsoft Research (Szeliski)

  • Venue:
  • Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 2002)

  • Year:
  • 2002

Abstract

This paper describes a new framework for video matting, the process of pulling a high-quality alpha matte and foreground from a video sequence. The framework builds upon techniques in natural image matting, optical flow computation, and background estimation. User interaction consists of garbage-matte specification, when background estimation is needed, and hand-drawn keyframe segmentations into "foreground," "background," and "unknown" regions. These segmentations, called trimaps, are interpolated across the video volume using forward and backward optical flow, and the competing flow estimates are combined based on information about where each flow is likely to be accurate. A Bayesian matting technique then uses the flowed trimaps to yield high-quality mattes of moving foreground elements with complex boundaries filmed by a moving camera. A novel technique for smoke matte extraction is also demonstrated.