Human skeleton tracking from depth data using geodesic distances and optical flow

  • Authors:
  • Loren Arthur Schwarz; Artashes Mkhitaryan; Diana Mateus; Nassir Navab

  • Venue:
  • Image and Vision Computing
  • Year:
  • 2012

Abstract

In this paper, we present a method for human full-body pose estimation from depth data that can be obtained using Time-of-Flight (ToF) cameras or the Kinect device. Our approach consists of robustly detecting anatomical landmarks in the 3D data and fitting a skeleton body model using constrained inverse kinematics. Instead of relying on appearance-based features for interest point detection, which can vary strongly with illumination and pose changes, we build upon a graph-based representation of the depth data that allows us to measure geodesic distances between body parts. As these distances do not change with body movement, we are able to localize anatomical landmarks independently of pose. To differentiate body parts that occlude each other, we employ motion information obtained from the optical flow between consecutive intensity images. We provide a qualitative and quantitative evaluation of our pose tracking method on ToF and Kinect sequences containing movements of varying complexity.
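
For readers who want to experiment with the core idea, the following is a minimal sketch of geodesic landmark detection, not the authors' code: it builds a k-nearest-neighbour graph over the 3D points of a single depth frame, computes geodesic distances from a torso-centre proxy with SciPy's Dijkstra routine, and keeps well-separated distance maxima (head, hands and feet in practice) as landmark candidates. The function name geodesic_extrema, the neighbourhood size k, and the 0.2 m separation threshold are illustrative assumptions, not values from the paper.

```python
# Sketch only: geodesic-extrema landmark candidates on a depth point cloud.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra
from scipy.spatial import cKDTree

def geodesic_extrema(points, k=8, n_landmarks=5, min_sep=0.2):
    """points: (N, 3) array of body-surface points (metres) from one depth frame.
    Returns indices of points whose geodesic distance from a torso-centre proxy
    is large and mutually well separated (landmark candidates)."""
    n = len(points)
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)          # k nearest neighbours (+ self)
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    weights = dists[:, 1:].ravel()                    # Euclidean edge lengths
    graph = csr_matrix((weights, (rows, cols)), shape=(n, n))

    # Use the point closest to the centroid as a stand-in for the torso centre.
    source = int(np.argmin(np.linalg.norm(points - points.mean(axis=0), axis=1)))
    geo = dijkstra(graph, directed=False, indices=source)

    # Greedily pick geodesic-distance maxima, enforcing a minimum Euclidean
    # separation between candidates (a simplification of geodesic separation).
    landmarks = []
    for i in np.argsort(geo)[::-1]:
        if not np.isfinite(geo[i]):                   # skip disconnected points
            continue
        if all(np.linalg.norm(points[i] - points[j]) > min_sep for j in landmarks):
            landmarks.append(int(i))
        if len(landmarks) == n_landmarks:
            break
    return landmarks
```

In the paper's full pipeline, such candidates would be disambiguated with optical-flow cues when limbs occlude each other and then fed into the constrained inverse-kinematics fit of the skeleton model; those stages are omitted from this sketch.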