Robust visual tracking of articulated human motion

  • Authors:
  • Feng Guo

  • Affiliations:
  • Arizona State University

  • Venue:
  • Doctoral dissertation, Arizona State University
  • Year:
  • 2007

Abstract

This dissertation presents a robust computational framework for visual tracking of articulated human motion. For a given type of movement, Gaussian process latent variable models are used to obtain low-dimensional manifold representations of silhouette images, body kinematics, and movement dynamics. A Bayesian mixture of experts (BME) and a relevance vector machine (RVM) are then used to establish mappings between the silhouette and body kinematics manifolds. Given a monocular video, articulated motion tracking can then be carried out using a particle filter defined over the silhouette and kinematics manifolds. This tracking framework is view-independent, self-initializing, and capable of tracking multiple kinematic trajectories. Plausible movement particles are obtained through a sampling process informed by both the learned movement dynamics and the current image observation. The framework is also extended to multiple views to reduce tracking ambiguity. Experimental results using both synthetic and real videos show the efficacy of the proposed framework. A number of other important issues are also addressed in this dissertation. A novel vectorized silhouette representation is developed using Gaussian mixture models and vector quantization. An adaptive particle filter is proposed to improve tolerance to abrupt dynamics changes during tracking. As two byproducts, a robust image-based human pose recognition system using wide-baseline stereo cameras and a model-based 3D arm tracking system are also developed and demonstrated on challenging real-life images and videos.
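To make the tracking recipe concrete, the sketch below shows a minimal particle filter over a learned low-dimensional kinematics manifold, the general pattern the abstract describes. It is an illustrative assumption, not the dissertation's implementation: the functions `dynamics` and `latent_to_obs` are hypothetical placeholders standing in for the learned GPLVM dynamics and the BME/RVM latent-to-silhouette mappings, and the Gaussian likelihood is an assumed observation model.

```python
# A minimal sketch of manifold-based particle filtering (assumed placeholders,
# not the dissertation's learned models).
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 3     # dimensionality of the kinematics manifold (assumed)
OBS_DIM = 8        # dimensionality of the silhouette descriptor (assumed)
N_PARTICLES = 200

W = rng.normal(size=(LATENT_DIM, OBS_DIM))  # fixed random map, stands in for BME/RVM

def dynamics(z):
    """Placeholder for learned movement dynamics on the manifold:
    a gentle drift plus Gaussian diffusion."""
    return 0.95 * z + rng.normal(scale=0.05, size=z.shape)

def latent_to_obs(z):
    """Placeholder for the learned mapping from kinematics manifold
    to a silhouette descriptor."""
    return np.tanh(z @ W)

def log_likelihood(obs, particles):
    """Assumed Gaussian observation model comparing predicted and
    observed silhouette descriptors."""
    diff = obs - latent_to_obs(particles)
    return -0.5 * np.sum(diff**2, axis=-1) / 0.1**2

def particle_filter_step(particles, weights, obs):
    # Propagate particles through the (placeholder) dynamics model.
    particles = dynamics(particles)
    # Reweight by agreement with the current image observation.
    logw = np.log(weights + 1e-300) + log_likelihood(obs, particles)
    logw -= logw.max()
    weights = np.exp(logw)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < N_PARTICLES / 2:
        idx = rng.choice(N_PARTICLES, size=N_PARTICLES, p=weights)
        particles = particles[idx]
        weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
    return particles, weights

# Usage: track a synthetic latent trajectory from noisy descriptors.
particles = rng.normal(size=(N_PARTICLES, LATENT_DIM))
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
z_true = rng.normal(size=LATENT_DIM)
for t in range(50):
    z_true = 0.95 * z_true + rng.normal(scale=0.05, size=LATENT_DIM)
    obs = latent_to_obs(z_true) + rng.normal(scale=0.1, size=OBS_DIM)
    particles, weights = particle_filter_step(particles, weights, obs)
estimate = weights @ particles  # posterior mean on the manifold
```

Resampling only when the effective sample size drops below half the particle count is one common design choice; it limits weight degeneracy while preserving particle diversity, which matters when the filter must maintain multiple plausible kinematic trajectories as the abstract describes.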