Learning Layered Motion Segmentations of Video

  • Authors:
  • M. Pawan Kumar; P. H. Torr; A. Zisserman

  • Affiliations:
  • Department of Computing, Oxford Brookes University, Oxford, UK; Department of Computing, Oxford Brookes University, Oxford, UK; Department of Engineering Science, University of Oxford, Oxford, UK

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 2008

Abstract

We present an unsupervised approach for learning a layered representation of a scene from a video for motion segmentation. Our method is applicable to any video containing piecewise parametric motion. The learnt model is a composition of layers, each consisting of one or more segments. The shape of each segment is represented by a binary matte, and its appearance is given by the RGB value of each point belonging to the matte. Included in the model are the effects of image projection, lighting, and motion blur. Furthermore, spatial continuity is explicitly modeled, resulting in contiguous segments. Unlike previous approaches, our method does not use reference frame(s) for initialization. The two main contributions of our method are: (i) a novel algorithm for obtaining the initial estimate of the model by dividing the scene into rigidly moving components using efficient loopy belief propagation; and (ii) refining the initial estimate using αβ-swap and α-expansion algorithms, which guarantee a strong local minimum. Results are presented on several classes of objects with different types of camera motion, e.g. videos of a human walking shot with static or translating cameras. We compare our method with the state of the art and demonstrate significant improvements.
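To make the layered representation concrete, below is a minimal Python sketch of the data structure the abstract describes: each segment carries a binary matte for shape and per-pixel RGB values for appearance, and layers composite back-to-front. The names (`Segment`, `Layer`, `composite`) are hypothetical, and the sketch deliberately omits the projection, lighting, and motion-blur terms that the paper's full model includes.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Segment:
    """One segment of a layer (hypothetical structure, not the paper's code)."""
    matte: np.ndarray       # H x W boolean mask: the binary matte (shape)
    appearance: np.ndarray  # H x W x 3 RGB values, valid where matte is True

@dataclass
class Layer:
    """A layer is composed of one or more segments."""
    segments: list

def composite(layers: list) -> np.ndarray:
    """Render layers back-to-front; later layers occlude earlier ones."""
    h, w = layers[0].segments[0].matte.shape
    image = np.zeros((h, w, 3))
    for layer in layers:
        for seg in layer.segments:
            image[seg.matte] = seg.appearance[seg.matte]
    return image
```

In the paper's pipeline, fitting such a model to video is posed as energy minimization: loopy belief propagation yields the initial rigid components, and the αβ-swap and α-expansion moves refine the segment labels.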