Self-Organizing Maps for Pose Estimation with a Time-of-Flight Camera

  • Authors:
  • Martin Haker, Martin Böhme, Thomas Martinetz, Erhardt Barth

  • Affiliations:
  • Institute for Neuro- and Bioinformatics, University of Lübeck, 23538 Lübeck, Germany (all authors)

  • Venue:
  • Dyn3D '09 Proceedings of the DAGM 2009 Workshop on Dynamic 3D Imaging
  • Year:
  • 2009

Abstract

We describe a technique for estimating human pose from an image sequence captured by a time-of-flight camera. The pose estimate is derived from a simple model of the human body that we fit to the data in 3D space. The model is represented by a graph of 44 vertices covering the upper torso, head, and arms. The anatomy of these body parts is encoded by the edges, i.e., an arm is represented by a chain of pairwise connected vertices, whereas the torso consists of a two-dimensional grid. The model can easily be extended to represent the legs by attaching further chains of pairwise connected vertices to the lower torso. The model is fit to the data in 3D space using an iterative update rule common to self-organizing maps. Despite its simplicity, the model captures the human pose robustly and can thus be used to track the major body parts, such as the arms, hands, and head. The tracking accuracy is around 5-6 cm root mean square (RMS) for the head and shoulders and around 2 cm RMS for the head. The procedure is straightforward to implement and real-time capable.
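The SOM-style fitting procedure the abstract outlines can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function names, learning rates, and the simple winner-plus-neighbors update are assumptions chosen to mirror the standard self-organizing-map update rule applied to a graph model in 3D.

```python
import numpy as np

def som_fit(vertices, edges, points, epochs=10, lr=0.1, neighbor_lr=0.05):
    """Fit graph vertices (Nx3) to a 3D point cloud (Mx3) with a SOM update.

    edges: list of (i, j) index pairs encoding the body topology
    (chains for the arms, a grid for the torso).
    Hypothetical parameters: lr / neighbor_lr are illustrative choices.
    """
    # Adjacency list so the winner's topological neighbors are pulled along.
    neighbors = {i: [] for i in range(len(vertices))}
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)

    v = vertices.astype(float).copy()
    for _ in range(epochs):
        for p in points:
            # Best-matching unit: the vertex closest to the data point.
            bmu = int(np.argmin(np.linalg.norm(v - p, axis=1)))
            v[bmu] += lr * (p - v[bmu])
            # Pull neighbors toward the point with a weaker rate,
            # which keeps the graph structure coherent.
            for n in neighbors[bmu]:
                v[n] += neighbor_lr * (p - v[n])
    return v

# Toy example: a 3-vertex chain fitted to points along the x-axis.
chain = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 2.0, 0.0]])
edges = [(0, 1), (1, 2)]
cloud = np.array([[x, 0.0, 0.0] for x in np.linspace(0.0, 2.0, 20)])
fitted = som_fit(chain, edges, cloud, epochs=50)
```

After enough epochs the chain's vertices are drawn onto the point cloud; in the full method, one such update pass per camera frame is what makes the tracking real-time capable.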