Learning hierarchical poselets for human parsing

  • Authors:
  • Yang Wang; Duan Tran; Zicheng Liao

  • Affiliations:
  • Dept. of Comput. Sci., Univ. of Illinois at Urbana-Champaign, Urbana, IL, USA

  • Venue:
  • CVPR '11 Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition
  • Year:
  • 2011

Abstract

We consider the problem of human parsing with part-based models. Most previous work on part-based models considers only rigid parts (e.g. torso, head, half limbs) guided by human anatomy. We argue that this representation of parts is not necessarily appropriate for human parsing. In this paper, we introduce hierarchical poselets, a new representation for human parsing. Hierarchical poselets can be rigid parts, but they can also be parts that cover large portions of the human body (e.g. torso + left arm); in the extreme case, they can be whole bodies. We develop a structured model to organize poselets hierarchically and learn the model parameters in a max-margin framework. We demonstrate the superior performance of our proposed approach on two datasets with aggressive pose variations.
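
The "max-margin framework" in the abstract refers to structured large-margin learning over the poselet hierarchy. As a rough sketch of what such an objective typically looks like (the joint feature map \(\Phi\), loss \(\Delta\), and regularization constant \(C\) below are generic placeholders, not details taken from the paper):

\[
\min_{\mathbf{w},\,\boldsymbol{\xi} \ge 0} \;\; \frac{1}{2}\|\mathbf{w}\|^2 + C \sum_{i=1}^{N} \xi_i
\quad \text{s.t.} \quad
\mathbf{w}^\top \big( \Phi(x_i, y_i) - \Phi(x_i, y) \big) \;\ge\; \Delta(y_i, y) - \xi_i
\;\; \forall i,\; \forall y \neq y_i,
\]

where \(x_i\) is a training image, \(y_i\) its annotated pose configuration over the hierarchy, and \(\Delta\) a loss penalizing disagreement between pose configurations. The constraints require the ground-truth configuration to outscore every competing configuration by a margin scaled to the loss, with slack variables \(\xi_i\) absorbing violations.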