Motion-Based View-Invariant Articulated Motion Detection and Pose Estimation Using Sparse Point Features

  • Authors:
  • Shrinivas J. Pundlik; Stanley T. Birchfield

  • Affiliations:
  • Clemson University, Clemson, USA; Clemson University, Clemson, USA

  • Venue:
  • ISVC '09 Proceedings of the 5th International Symposium on Advances in Visual Computing: Part I
  • Year:
  • 2009

Abstract

We present an approach for articulated motion detection and pose estimation that uses only motion information. To estimate the pose and viewpoint, we introduce a novel motion descriptor that captures the spatial relationships of the motion vectors representing various parts of the person, computed from the trajectories of a number of sparse points. A nearest-neighbor search for the closest motion descriptor in labeled training data of human walking poses in multiple views yields an observation probability, which is fed to a Hidden Markov Model defined over multiple poses and viewpoints to obtain temporally consistent pose estimates. Experimental results on various sequences of walking subjects seen from multiple viewpoints demonstrate the effectiveness of the approach. In particular, our purely motion-based approach is able to track people even when other visual cues are unavailable, such as in low-light situations.
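The pipeline the abstract describes (descriptor matching against labeled pose/view training data to get an observation probability, then temporal smoothing with an HMM) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the soft nearest-neighbor weighting, the Euclidean descriptor distance, the function names, and the use of Viterbi decoding are all assumptions introduced here.

```python
import numpy as np

def nearest_descriptor_probs(query, train_descs, train_labels, n_states, tau=1.0):
    """Turn distances from a query motion descriptor to labeled training
    descriptors into an observation probability over (pose, view) states,
    using a soft nearest-neighbor weighting (an illustrative choice)."""
    dists = np.linalg.norm(train_descs - query, axis=1)
    weights = np.exp(-dists / tau)           # closer descriptors weigh more
    probs = np.zeros(n_states)
    for w, label in zip(weights, train_labels):
        probs[label] += w                    # accumulate per (pose, view) state
    return probs / probs.sum()

def viterbi(obs_probs, trans, init):
    """Most likely (pose, view) state sequence given per-frame observation
    probabilities, a state transition matrix, and an initial distribution."""
    T, S = obs_probs.shape
    log_delta = np.log(init) + np.log(obs_probs[0])
    backptr = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = log_delta[:, None] + np.log(trans)   # scores[i, j]: i -> j
        backptr[t] = scores.argmax(axis=0)
        log_delta = scores.max(axis=0) + np.log(obs_probs[t])
    path = [int(log_delta.argmax())]
    for t in range(T - 1, 0, -1):            # backtrack the best path
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]
```

A transition matrix that favors staying in the same pose/view state (or moving to an adjacent walk-cycle pose) gives the temporal consistency the abstract refers to: frames with ambiguous descriptors are pulled toward the states their neighbors support.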