Optical Flow Estimation and Segmentation of Multiple Moving Dynamic Textures

  • Authors:
  • René Vidal; Avinash Ravichandran

  • Affiliations:
  • Johns Hopkins University; Johns Hopkins University

  • Venue:
  • CVPR '05: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), Volume 2
  • Year:
  • 2005

Abstract

We consider the problem of modeling a scene containing multiple dynamic textures undergoing multiple rigid-body motions, e.g., a video sequence of water taken by a rigidly moving camera. We propose to model each moving dynamic texture with a time-varying linear dynamical system (LDS) plus a 2-D translational motion model. We first consider a scene with a single moving dynamic texture and show how to simultaneously learn the parameters of the time-varying LDS as well as the optical flow of the scene using the so-called dynamic texture constancy constraint (DTCC). We then consider a scene with multiple non-moving dynamic textures and show that learning the parameters of each time-invariant LDS as well as its region of support is equivalent to clustering data living in multiple subspaces. We solve this problem with a combination of PCA and GPCA. Finally, we consider a scene with multiple moving dynamic textures, and show how to simultaneously learn the parameters of multiple time-varying LDSs and multiple 2-D translational models by clustering data living in multiple dynamically evolving subspaces. We test our approach on sequences of flowers, water, grass, and a beating heart.
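For the single non-moving case, the abstract's notion of fitting a time-invariant LDS to a dynamic texture can be illustrated with the standard SVD-based system-identification recipe common in the dynamic-texture literature: factor the stacked frames as Y ≈ CX, then fit the state transition A by least squares. This is a minimal sketch, not the paper's algorithm (it omits the DTCC, the translational model, and the GPCA segmentation); the function name `fit_lds` and the parameter `n_state` are illustrative assumptions.

```python
import numpy as np

def fit_lds(Y, n_state):
    """Fit y_t = C x_t, x_{t+1} = A x_t from frames stacked as columns of Y.

    Sketch of SVD-based suboptimal LDS identification; not the paper's method.
    """
    # Best rank-n_state factorization of the measurements: Y ~= C X
    U, S, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n_state]                       # observation matrix (pixels x states)
    X = np.diag(S[:n_state]) @ Vt[:n_state]  # state trajectory, one column per frame
    # Least-squares estimate of the state transition: X[:, 1:] ~= A X[:, :-1]
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
    return A, C, X

# Synthetic check: "frames" generated by a known 2-state LDS (no noise)
rng = np.random.default_rng(0)
A_true = np.array([[0.9, -0.2], [0.2, 0.9]])
C_true = rng.standard_normal((50, 2))        # 50 pixels, 2 hidden states
x = np.array([1.0, 0.0])
frames = []
for _ in range(40):
    frames.append(C_true @ x)
    x = A_true @ x
Y = np.column_stack(frames)

A, C, X = fit_lds(Y, n_state=2)
recon_err = np.linalg.norm(Y - C @ X) / np.linalg.norm(Y)
pred_err = np.linalg.norm(X[:, 1:] - A @ X[:, :-1])
```

On noise-free synthetic data both residuals are essentially zero; on real texture video one would choose `n_state` by the decay of the singular values. The segmentation step in the paper then groups pixel trajectories by which such subspace they live in, which this sketch does not attempt.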