Context-Based Appearance Descriptor for 3D Human Pose Estimation from Monocular Images

  • Authors:
  • S. Sedai;M. Bennamoun;D. Huynh

  • Venue:
  • DICTA '09 Proceedings of the 2009 Digital Image Computing: Techniques and Applications
  • Year:
  • 2009

Abstract

In this paper, we propose a novel appearance descriptor for 3D human pose estimation from monocular images using a learning-based technique. Our image descriptor is built from intermediate local appearance descriptors that we design to encapsulate local appearance context and to be resilient to noise. We encode the image by the histogram of such local appearance context descriptors computed over the image to obtain the final image descriptor for pose estimation. We name this final image descriptor the Histogram of Local Appearance Context (HLAC). We then use Relevance Vector Machine (RVM) regression to learn the direct mapping between the proposed HLAC image descriptor space and the 3D pose space. Given a test image, we first compute the HLAC descriptor and then input it to the trained regressor to obtain the final output pose in real time. We compared our approach with other methods using a synchronized video and 3D motion dataset. We compared our proposed HLAC image descriptor with the Histogram of Shape Context and Histogram of SIFT-like descriptors. The evaluation results show that the HLAC descriptor outperforms both of them in the context of 3D human pose estimation.
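The pipeline the abstract describes (local descriptors → histogram encoding → learned descriptor-to-pose regression) can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's implementation: the descriptor dimensions, codebook size, and pose dimensionality are hypothetical, and plain ridge regression is substituted for the paper's RVM regressor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 50 "images", each yielding 200 local
# appearance-context descriptors of dimension 32, paired with a
# 3D pose vector (e.g. 15 joints x 3 coordinates = 45 values).
# All sizes here are illustrative assumptions.
n_images, n_local, d_local, d_pose = 50, 200, 32, 45
local_descs = rng.normal(size=(n_images, n_local, d_local))
poses = rng.normal(size=(n_images, d_pose))

# Step 1: build a codebook of k prototype descriptors (sampled at
# random here; in practice one would cluster training descriptors).
k = 16
idx = rng.choice(n_images * n_local, size=k, replace=False)
codebook = local_descs.reshape(-1, d_local)[idx]

def encode(descs):
    """HLAC-style encoding: histogram of nearest-codeword
    assignments over an image's local descriptors, L1-normalized."""
    d2 = ((descs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

X = np.stack([encode(img) for img in local_descs])

# Step 2: learn the direct mapping from descriptor space to pose
# space. The paper trains an RVM regressor; closed-form ridge
# regression is used here as a simple stand-in.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ poses)

# Step 3: at test time, encode the image and regress the pose.
pred = encode(local_descs[0]) @ W
print(pred.shape)  # one 45-dim pose vector
```

The key design point mirrored here is that pose recovery reduces to a single fast regression on a compact histogram, which is what makes real-time prediction at test time feasible.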