Multi-camera tracking of articulated human motion using motion and shape cues

  • Authors:
  • Aravind Sundaresan, Rama Chellappa

  • Affiliations:
  • Department of Electrical and Computer Engineering, University of Maryland, College Park, MD (both authors)

  • Venue:
  • ACCV'06: Proceedings of the 7th Asian Conference on Computer Vision - Volume Part II
  • Year:
  • 2006

Abstract

We present a framework and algorithm for tracking articulated human motion. We use multiple calibrated cameras and an articulated human shape model. Tracking is performed using motion cues as well as image-based cues (such as silhouettes and “motion residues”, hereafter referred to as spatial cues), as opposed to constructing a 3D volume or visual hull. Our algorithm consists of a predictor and a corrector: the predictor estimates the pose at time t + 1 using motion information between the images at times t and t + 1. The error in the estimated pose is then corrected using spatial cues from the images at time t + 1. In the predictor, we use robust multi-scale parametric optimisation to estimate the pixel displacement of each body segment. We then use an iterative procedure to estimate the change in pose from the pixel displacements of points on the individual body segments. We present a method for fusing information from different spatial cues, such as silhouettes and “motion residues”, into a single energy function. We then express this energy function in terms of the pose parameters, and find the optimum pose for which the energy is minimised.
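The predictor-corrector structure described above can be sketched in a few lines. This is a minimal illustration, not the paper's method: the pose vector, the per-frame motion delta, and the quadratic "spatial-cue" energy below are toy stand-ins for the articulated pose parameters, the multi-scale motion estimate, and the silhouette/motion-residue energy function of the paper.

```python
# Hedged sketch of a predictor-corrector tracking loop. All quantities
# (pose, motion_delta, energy gradient) are toy stand-ins, not the
# paper's actual pose model or image-based energy.

def predict(pose, motion_delta):
    """Predictor: advance the pose from t to t+1 using motion information."""
    return [p + d for p, d in zip(pose, motion_delta)]

def correct(pose, energy_grad, steps=50, lr=0.1):
    """Corrector: reduce a spatial-cue energy by gradient descent on the pose."""
    for _ in range(steps):
        g = energy_grad(pose)
        pose = [p - lr * gi for p, gi in zip(pose, g)]
    return pose

def track(pose0, motion_deltas, energy_grad):
    """Run predict-then-correct over a sequence of frames."""
    poses = [pose0]
    for delta in motion_deltas:
        predicted = predict(poses[-1], delta)      # pose estimate at t+1
        poses.append(correct(predicted, energy_grad))  # refine with spatial cues
    return poses

# Toy spatial-cue energy: a quadratic pull toward a "silhouette-consistent"
# target pose; its gradient drives the corrector.
target = [1.0, 2.0]
energy_grad = lambda p: [2.0 * (pi - ti) for pi, ti in zip(p, target)]

trajectory = track([0.0, 0.0], [[0.5, 0.5], [0.2, 1.0]], energy_grad)
```

Here each corrector call converges to the target regardless of the (noisy) motion prediction; in the paper, the corrector instead minimises an energy fused from silhouettes and motion residues, so the prediction seeds the optimisation rather than being discarded.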