Statistical gesture models for 3D motion capture from a library of gestures with variants

  • Authors:
  • Zhenbo Li; Patrick Horain; André-Marie Pez; Catherine Pelachaud

  • Affiliations:
  • Institut Telecom, Telecom SudParis, Evry Cedex, France (Z. Li, P. Horain); Institut Telecom, Telecom ParisTech, Paris Cedex 13, France (A.-M. Pez, C. Pelachaud)

  • Venue:
  • GW'09: Proceedings of the 8th International Conference on Gesture in Embodied Communication and Human-Computer Interaction
  • Year:
  • 2009


Abstract

A challenge for 3D motion capture by monocular vision is the 3D-2D projection ambiguity, which may yield incorrect poses during tracking. In this paper, we propose improving 3D motion capture by learning human gesture models from a library of gestures with variants. This library has been created from virtual human animations. Gestures are described as Gaussian Process Dynamical Models (GPDM) and are used as constraints for motion tracking. Given the raw input poses from the tracker, the gesture model helps to correct ambiguous poses. The benefit of the proposed method is demonstrated with experimental results.
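The core idea — project a raw, possibly ambiguous tracker pose onto a low-dimensional manifold learned from example gestures, then use the model's reconstruction to correct it — can be illustrated with a much-simplified sketch. The code below is not the authors' GPDM: it replaces the learned latent dynamics with a plain Gaussian-process regression from a 1D latent coordinate to pose space, over a synthetic 3-dimensional "gesture" trajectory standing in for a library animation. All names and parameters are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.3):
    # Squared-exponential kernel between latent points A (n,1) and B (m,1).
    d2 = (A - B.T) ** 2
    return np.exp(-d2 / (2 * length_scale ** 2))

# "Library" gesture: a smooth synthetic trajectory in pose space,
# standing in for a virtual-human animation (3 joint-angle dimensions).
t = np.linspace(0, 1, 50)[:, None]                       # latent coordinates
poses = np.hstack([np.sin(2 * np.pi * t),
                   np.cos(2 * np.pi * t),
                   t])                                   # (50, 3) poses

# Fit GP regression from the latent space to pose space
# (a stand-in for the GPDM's latent-to-pose mapping).
K = rbf_kernel(t, t) + 1e-6 * np.eye(len(t))             # jitter for stability
alpha = np.linalg.solve(K, poses)                        # precomputed weights

def reconstruct(z):
    """Map latent coordinates z (m,1) back to pose space."""
    return rbf_kernel(z, t) @ alpha

def correct_pose(raw_pose, grid=np.linspace(0, 1, 200)[:, None], w=0.8):
    """Project a raw tracker pose onto the gesture model.

    Finds the latent coordinate whose reconstruction is closest to the raw
    pose, then blends the model's reconstruction with the observation so the
    gesture prior constrains, rather than fully overrides, the tracker.
    """
    recon = reconstruct(grid)                            # (200, 3)
    i = np.argmin(np.sum((recon - raw_pose) ** 2, axis=1))
    return w * recon[i] + (1 - w) * raw_pose

# Example: a true pose on the gesture, corrupted by tracking noise
# (simulating an ambiguous monocular estimate).
rng = np.random.default_rng(0)
true_pose = np.array([np.sin(2 * np.pi * 0.3),
                      np.cos(2 * np.pi * 0.3),
                      0.3])
raw_pose = true_pose + rng.normal(0, 0.3, size=3)
fixed = correct_pose(raw_pose)
print("raw error:  ", np.linalg.norm(raw_pose - true_pose))
print("fixed error:", np.linalg.norm(fixed - true_pose))
```

Because the corrected pose is pulled back onto the learned gesture manifold, its error with respect to the ground-truth pose drops below that of the raw tracker output; the real GPDM additionally models temporal dynamics in the latent space, so corrections stay consistent across frames rather than per-pose as in this sketch.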