Kinematic self retargeting: A framework for human pose estimation

  • Authors:
  • Youding Zhu; Behzad Dariush; Kikuo Fujimura

  • Affiliations:
  • The Ohio State University, Dreese Laboratory 395, 2015 Neil Ave., Columbus, OH 43210, USA; Honda Research Institute USA, 425 National Ave., Suite 100, Mountain View, CA 94043, USA

  • Venue:
  • Computer Vision and Image Understanding
  • Year:
  • 2010

Abstract

This paper presents a model-based, Cartesian control theoretic approach for estimating human pose from a set of key feature points (key-points) detected using depth images obtained from a time-of-flight imaging device. The key-points represent positions of anatomical landmarks, detected and tracked over time by a probabilistic inference algorithm that is robust to partial occlusions and capable of resolving ambiguities in detection. The detected key-points are subsequently kinematically self-retargeted, or mapped to the subject's own kinematic model, in order to predict the pose of an articulated human model at the current state, resolve ambiguities in key-point detection, and provide estimates of missing or intermittently occluded key-points. Based on a standard kinematic and mesh model of a human, constraints such as joint limit avoidance and self-penetration avoidance are enforced within the retargeting framework. Effectiveness of the algorithm is demonstrated experimentally for upper- and full-body pose reconstruction from a small set of detected key-points. On average, the proposed algorithm runs at approximately 10 frames per second for upper-body reconstruction and 5 frames per second for whole-body reconstruction on a standard 2.13 GHz laptop PC.
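At its core, the retargeting step described above is a closed-loop inverse-kinematics problem: Cartesian key-point errors are fed back through the kinematic model to drive the joint angles of the articulated figure, subject to constraints such as joint limits. The sketch below is not the authors' implementation; it illustrates the general technique with a damped least-squares differential IK loop on a hypothetical 2-link planar arm, with joint limits enforced by simple clamping (the paper enforces richer constraints, including self-penetration avoidance). All link lengths, gains, and limit values are illustrative assumptions.

```python
import math

# Illustrative parameters (not from the paper).
L1, L2 = 1.0, 1.0                                # link lengths
LIMITS = [(-math.pi, math.pi), (0.0, math.pi)]   # (min, max) per joint

def fk(q):
    """Forward kinematics: joint angles -> end-effector position (x, y)."""
    x = L1 * math.cos(q[0]) + L2 * math.cos(q[0] + q[1])
    y = L1 * math.sin(q[0]) + L2 * math.sin(q[0] + q[1])
    return x, y

def jacobian(q):
    """2x2 geometric Jacobian of the planar arm."""
    s1, c1 = math.sin(q[0]), math.cos(q[0])
    s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

def dls_step(q, target, gain=0.5, damping=0.05):
    """One damped least-squares update: dq = J^T (J J^T + lambda^2 I)^-1 e.

    The damping term keeps the update bounded near singular configurations.
    """
    x, y = fk(q)
    ex, ey = gain * (target[0] - x), gain * (target[1] - y)
    J = jacobian(q)
    # A = J J^T + lambda^2 I is 2x2, so invert it in closed form.
    a = J[0][0] ** 2 + J[0][1] ** 2 + damping ** 2
    b = J[0][0] * J[1][0] + J[0][1] * J[1][1]
    d = J[1][0] ** 2 + J[1][1] ** 2 + damping ** 2
    det = a * d - b * b
    wx = ( d * ex - b * ey) / det
    wy = (-b * ex + a * ey) / det
    dq = [J[0][0] * wx + J[1][0] * wy,
          J[0][1] * wx + J[1][1] * wy]
    # Enforce joint limits by clamping each updated angle into its range.
    return [min(max(q[i] + dq[i], LIMITS[i][0]), LIMITS[i][1])
            for i in range(2)]

def retarget(q, target, iters=100):
    """Iterate until the end effector converges on the key-point target."""
    for _ in range(iters):
        q = dls_step(q, target)
    return q
```

Running `retarget([0.3, 0.5], (1.2, 0.8))` drives the arm's end effector toward the target key-point while respecting the declared joint limits; in the paper this kind of feedback loop operates on a full-body model with many key-points simultaneously.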