Robust Algebraic Segmentation of Mixed Rigid-Body and Planar Motions from Two Views

  • Authors:
  • Shankar R. Rao, Allen Y. Yang, S. Shankar Sastry, Yi Ma

  • Affiliations:
  • Shankar R. Rao: Department of ECE, University of Illinois at Urbana-Champaign, Coordinated Science Laboratory, Urbana, IL 61801, USA; and HRL Laboratories, LLC, Malibu, CA 90265, USA
  • Allen Y. Yang: Department of EECS, University of California, Berkeley, CA 94720, USA
  • S. Shankar Sastry: Department of EECS, University of California, Berkeley, CA 94720, USA
  • Yi Ma: Department of ECE, University of Illinois at Urbana-Champaign, Coordinated Science Laboratory, Urbana, IL 61801, USA; and Visual Computing Group, Microsoft Research Asia, Beijing, China

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 2010


Abstract

This paper studies the segmentation of multiple rigid-body motions in a 3-D dynamic scene under perspective camera projection. We consider dynamic scenes that contain both 3-D rigid-body structures and 2-D planar structures. Based on the well-known epipolar and homography constraints between two views, we propose a hybrid perspective constraint (HPC) to unify the representation of rigid-body and planar motions. Given a mixture of K hybrid perspective constraints, we propose an algebraic process, called Robust Algebraic Segmentation (RAS), that partitions the image correspondences into the individual 3-D motions. In particular, we prove that the joint distribution of image correspondences is uniquely determined by a set of (2K)-th degree polynomials, a global signature for the union of K motions of possibly mixed type. The first and second derivatives of these polynomials provide a means to recover the association of the individual image samples to their respective motions. Finally, using robust statistics, we show that the polynomials can be robustly estimated in the presence of moderate image noise and outliers. We conduct extensive simulations and real experiments to validate the performance of the new algorithm. The results demonstrate that RAS achieves notably higher accuracy than most existing robust motion-segmentation methods, including random sample consensus (RANSAC) and its variations. Our implementation also runs two to three times faster than these existing methods. The implementation of the algorithm and the benchmark scripts are available at http://perception.csl.illinois.edu/ras/ .
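The core algebraic idea described in the abstract (fit one polynomial that vanishes on the union of all motions, then use its derivatives to assign each sample to its motion) can be illustrated on a deliberately simple toy problem. The sketch below is not the paper's RAS algorithm: instead of hybrid perspective constraints it segments 2-D points drawn from a union of K = 2 lines through the origin, the classic GPCA-style setting. All function names here (`veronese2`, `fit_vanishing_polynomial`, etc.) are illustrative, not from the paper's released code.

```python
import numpy as np

def veronese2(X, n):
    """Degree-n Veronese embedding of 2-D points: [x^n, x^(n-1) y, ..., y^n]."""
    x, y = X[:, 0], X[:, 1]
    return np.stack([x ** (n - k) * y ** k for k in range(n + 1)], axis=1)

def fit_vanishing_polynomial(X, K):
    """Coefficients of a degree-K polynomial vanishing on all samples:
    the right singular vector of the embedded data matrix with the
    smallest singular value (its approximate null space)."""
    _, _, Vt = np.linalg.svd(veronese2(X, K))
    return Vt[-1]

def gradients(X, c, K):
    """Unit gradient of p(x, y) = sum_k c[k] x^(K-k) y^k at each sample.
    At a sample on one of the lines, the gradient is normal to that line,
    so it identifies which motion (line) the sample belongs to."""
    x, y = X[:, 0], X[:, 1]
    gx = sum(c[k] * (K - k) * x ** max(K - k - 1, 0) * y ** k for k in range(K + 1))
    gy = sum(c[k] * k * x ** (K - k) * y ** max(k - 1, 0) for k in range(K + 1))
    G = np.stack([gx, gy], axis=1)
    return G / np.linalg.norm(G, axis=1, keepdims=True)

def segment(X, K=2):
    """Group samples by gradient direction (up to sign): a crude stand-in
    for the derivative-based association step described in the abstract."""
    c = fit_vanishing_polynomial(X, K)
    G = gradients(X, c, K)
    ref = G[0]
    return (np.abs(G @ ref) > 0.9).astype(int)

# Two "motions": points on the lines y = 0 and y = x (noise-free).
t = np.linspace(1.0, 5.0, 50)
line1 = np.stack([t, np.zeros_like(t)], axis=1)
line2 = np.stack([t, t], axis=1)
X = np.vstack([line1, line2])
labels = segment(X)  # same label within each line, different across lines
```

The full method in the paper replaces the lines with K hybrid perspective constraints on image correspondences, uses (2K)-th degree polynomials, and adds robust-statistics machinery to handle noise and outliers; none of that is reflected in this sketch.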