Learned Models for Estimation of Rigid and Articulated Human Motion from Stationary or Moving Camera

  • Authors:
  • Yaser Yacoob; Larry S. Davis

  • Affiliations:
  • Computer Vision Laboratory, Center for Automation Research, University of Maryland, College Park, MD 20742, USA. yaser@umiacs.umd.edu; Computer Vision Laboratory, Center for Automation Research, University of Maryland, College Park, MD 20742, USA. lsd@umiacs.umd.edu

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 2000

Abstract

We propose an approach for modeling, measurement and tracking of rigid and articulated motion as viewed from a stationary or moving camera. We first propose an approach for learning temporal-flow models from exemplar image sequences. The temporal-flow models are represented as a set of orthogonal temporal-flow bases that are learned using principal component analysis of instantaneous flow measurements. Spatial constraints on the temporal-flow are then incorporated to model the movement of regions of rigid or articulated objects. These spatio-temporal flow models are subsequently used as the basis for simultaneous measurement and tracking of brightness motion in image sequences. Then we address the problem of estimating composite independent object and camera image motions. We employ the spatio-temporal flow models learned through observing typical movements of the object from a stationary camera to decompose image motion into independent object and camera motions. The performance of the algorithms is demonstrated on several long image sequences of rigid and articulated bodies in motion.
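The core learning step described in the abstract — extracting a set of orthogonal temporal-flow bases from exemplar flow measurements via principal component analysis — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the toy data, and the choice of SVD for the PCA are all assumptions made for clarity.

```python
import numpy as np

def learn_flow_bases(flows, k):
    """Learn k orthogonal flow bases by PCA.

    flows: (n_examples, d) array, each row an instantaneous flow
           field flattened to a vector (a stand-in for the paper's
           flow measurements over exemplar image sequences).
    Returns a (k, d) orthonormal basis and the mean flow.
    """
    mean = flows.mean(axis=0)
    centered = flows - mean
    # SVD of the mean-centered data: rows of Vt are the principal
    # directions, ordered by explained variance.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[:k], mean

def project(flow, bases, mean):
    """Coefficients of a new flow field in the learned basis."""
    return bases @ (flow - mean)

# Toy data (hypothetical): 20 noisy flow fields dominated by two
# underlying motion patterns, so two bases should capture them.
rng = np.random.default_rng(0)
d = 50
pattern1 = rng.standard_normal(d)
pattern2 = rng.standard_normal(d)
flows = np.array([3.0 * rng.standard_normal() * pattern1
                  + rng.standard_normal() * pattern2
                  + 0.01 * rng.standard_normal(d)
                  for _ in range(20)])

bases, mean = learn_flow_bases(flows, k=2)
coeffs = project(flows[0], bases, mean)
```

In the paper these bases are further combined with spatial constraints to form spatio-temporal flow models; the sketch above covers only the PCA step that produces the orthogonal temporal-flow bases.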