Human motion estimation from a reduced marker set

  • Authors:
  • Guodong Liu; Jingdan Zhang; Wei Wang; Leonard McMillan

  • Affiliations:
  • University of North Carolina at Chapel Hill (all authors)

  • Venue:
  • I3D '06 Proceedings of the 2006 symposium on Interactive 3D graphics and games
  • Year:
  • 2006


Abstract

Motion capture data from human subjects exhibits considerable redundancy. In this paper, we propose novel methods for exploiting this redundancy. In particular, we set out to find a subset of motion-capture markers that can provide fast and high-quality predictions of the remaining markers. We then develop a model that uses this reduced marker set to predict the others. We demonstrate that this subset of the original markers is sufficient to capture subtle variations in human motion.

We take a data-driven modeling approach to learn piecewise local linear models from a marker-based training set. We first divide motion sequences into segments of low dimensionality. We then extract a feature vector from each motion segment and use these feature vectors as modeling primitives to cluster the segments into a hierarchy of local linear models via a divisive clustering method. An appropriate linear model for reconstructing a full-body pose is selected automatically by a classifier driven by the reduced marker set. After offline training, our method can quickly reconstruct full-body human motion from a reduced marker set without storing and searching a large database. We also demonstrate our method's ability to generalize over a variety of motions from multiple subjects.
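The pipeline the abstract describes — cluster training poses into local regions, fit one linear map per region from the reduced markers to the full set, then pick a model at runtime — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it substitutes a flat k-means for the hierarchical divisive clustering and a nearest-center rule for the learned classifier, and all function names and parameters are hypothetical.

```python
import numpy as np

def fit_local_linear_models(full_poses, reduced_idx, n_clusters=4, seed=0):
    """Cluster training poses and fit one affine least-squares map per
    cluster from reduced-marker coordinates to the full marker set.
    (Simplified stand-in for the paper's hierarchical divisive clustering.)"""
    rng = np.random.default_rng(seed)
    X = full_poses[:, reduced_idx]   # reduced-marker observations
    Y = full_poses                   # full-body targets
    # Naive k-means on the reduced coordinates.
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(20):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    models = []
    for k in range(n_clusters):
        Xk, Yk = X[labels == k], Y[labels == k]
        # Affine least squares: [Xk, 1] @ W ≈ Yk.
        A = np.hstack([Xk, np.ones((len(Xk), 1))])
        W, *_ = np.linalg.lstsq(A, Yk, rcond=None)
        models.append(W)
    return centers, models

def reconstruct(reduced_pose, centers, models):
    """Select the nearest local model (a stand-in for the paper's
    classifier) and predict the full marker set."""
    k = np.argmin(((centers - reduced_pose) ** 2).sum(axis=1))
    return np.concatenate([reduced_pose, [1.0]]) @ models[k]
```

On data that truly lies on piecewise linear patches, each local map reconstructs its region exactly; the hierarchy and classifier in the paper serve to make that region selection fast and reliable for real motion data.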