Locating human hands for real-time pose estimation from monocular video

  • Authors:
  • Xin Lian; Qingmin Liao

  • Affiliations:
  • Tsinghua University, China; Tsinghua University, China

  • Venue:
  • Proceedings of the 17th ACM Symposium on Virtual Reality Software and Technology
  • Year:
  • 2010

Abstract

This paper presents a real-time system that detects and estimates the pose of the human upper body from monocular video. A novel approach to locating the hands is proposed, designed to cope with complicated situations such as short sleeves, fast motion, and occlusion. The human silhouette and skin-color blobs are extracted from each video frame; candidate locations for the head, hands, and elbows are then chosen and evaluated by an inverse-kinematics-based strategy. Experiments demonstrate the efficacy and robustness of the approach. The algorithm was developed for a camera-based tennis game, in which the player's pose has to be estimated in real time (e.g., for avatar animation and action recognition), and it can also be applied in other human-computer interaction applications.
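To make the skin-color blob extraction step concrete, the sketch below shows one common way such blobs can be obtained with OpenCV. This is a minimal illustration only, not the authors' implementation: the function name skin_blobs, the YCrCb threshold values, and the minimum-area filter are all assumptions for demonstration purposes.

```python
import cv2
import numpy as np

# Illustrative YCrCb skin-color range; these thresholds are assumptions,
# not values reported in the paper.
SKIN_LOW = np.array([0, 133, 77], dtype=np.uint8)
SKIN_HIGH = np.array([255, 173, 127], dtype=np.uint8)

def skin_blobs(frame_bgr, min_area=500):
    """Return contours of candidate skin-color blobs in one video frame."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, SKIN_LOW, SKIN_HIGH)
    # Morphological cleanup to suppress noise before extracting blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs large enough to plausibly be a head or a hand.
    return [c for c in contours if cv2.contourArea(c) >= min_area]

# Usage: grab one frame from a monocular camera and list candidate blobs.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    blobs = skin_blobs(frame)
    print(f"found {len(blobs)} candidate skin-color blobs")
cap.release()
```

In the system described by the abstract, blobs like these would be combined with the extracted silhouette to propose head, hand, and elbow candidates, which the inverse-kinematics-based strategy then evaluates; that evaluation stage is not shown here.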