Accurate 3D pose estimation from a single depth image

  • Authors:
  • Mao Ye; Xianwang Wang; Ruigang Yang; Liu Ren; Marc Pollefeys

  • Affiliations:
  • University of Kentucky, USA; HP Labs, Palo Alto, USA; University of Kentucky, USA; Bosch Research, USA; ETH Zürich, Germany

  • Venue:
  • ICCV '11 Proceedings of the 2011 International Conference on Computer Vision
  • Year:
  • 2011

Abstract

This paper presents a novel system to estimate body pose configuration from a single depth map. It combines both pose detection and pose refinement. The input depth map is matched with a set of pre-captured motion exemplars to generate a body configuration estimate, as well as a semantic labeling of the input point cloud. The initial estimate is then refined by directly fitting the body configuration to the observation (e.g., the input depth). Beyond the new system architecture, our other contributions include a point cloud smoothing technique modified to handle very noisy input depth maps, and a point cloud alignment and pose search algorithm that is both view-independent and efficient. Experiments on a public dataset show that our approach achieves significantly higher accuracy than previous state-of-the-art methods.
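The two-stage architecture the abstract describes (exemplar-based detection followed by refinement against the observed depth) can be sketched as below. This is a minimal illustrative stand-in, not the paper's method: it assumes a simple centering-and-scaling normalization in place of the paper's view-independent alignment, a one-sided Chamfer distance as the exemplar-matching cost, and a toy coordinate-descent fit for the refinement step.

```python
import numpy as np

def normalize_cloud(points):
    # Center the cloud and scale to unit RMS radius so matching is
    # translation- and scale-invariant (a simplified stand-in for the
    # paper's view-independent point cloud alignment).
    centered = points - points.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / scale

def detect_pose(input_cloud, exemplars):
    # Stage 1: pose detection. Pick the pre-captured exemplar whose
    # normalized cloud is closest to the input. One-sided Chamfer
    # distance is an assumed similarity measure, not the paper's own.
    query = normalize_cloud(input_cloud)
    best_pose, best_cost = None, np.inf
    for cloud, pose in exemplars:
        ref = normalize_cloud(cloud)
        # Mean nearest-neighbor distance from query points to exemplar.
        d = np.linalg.norm(query[:, None, :] - ref[None, :, :], axis=2)
        cost = d.min(axis=1).mean()
        if cost < best_cost:
            best_pose, best_cost = pose, cost
    return best_pose  # initial body configuration estimate

def refine_pose(initial_pose, observed, model_fn, steps=50, lr=0.1):
    # Stage 2: pose refinement. Toy gradient-free coordinate descent
    # that nudges each pose parameter to reduce the squared error
    # between the model's prediction and the observed depth points.
    pose = initial_pose.copy()
    def err(p):
        return ((model_fn(p) - observed) ** 2).sum()
    for _ in range(steps):
        for j in range(pose.size):
            for delta in (lr, -lr):
                trial = pose.copy()
                trial[j] += delta
                if err(trial) < err(pose):
                    pose = trial
    return pose
```

In this sketch, `model_fn` stands in for a forward model mapping a pose vector to predicted observations; the refinement stage simply perturbs each parameter until the fit error stops decreasing.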