Human Context: Modeling Human-Human Interactions for Monocular 3D Pose Estimation

  • Authors:
  • Mykhaylo Andriluka; Leonid Sigal

  • Affiliations:
  • Max Planck Institute for Informatics, Saarbrücken, Germany; Disney Research, Pittsburgh

  • Venue:
  • AMDO'12: Proceedings of the 7th International Conference on Articulated Motion and Deformable Objects
  • Year:
  • 2012

Abstract

Automatic recovery of the 3d poses of multiple interacting subjects from an unconstrained monocular image sequence is a challenging and largely unaddressed problem. We observe, however, that by taking the interactions explicitly into account, treating individual subjects as mutual "context" for one another, performance on this challenging problem can be improved. Building on this observation, in this paper we develop an approach that first jointly estimates the 2d poses of people using a multi-person extension of the pictorial structures model and then lifts them to 3d. We illustrate the effectiveness of our method on a new dataset of dancing couples and on challenging videos from dance competitions.
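The abstract describes a two-stage pipeline: jointly estimate 2d poses of interacting people, using each subject as context for the other, then lift the 2d poses to 3d. The sketch below is a toy illustration of that structure only; the function names, the simple coupling step standing in for multi-person pictorial-structures inference, and the constant-depth "lifting" are all assumptions for illustration, not the authors' actual model.

```python
def joint_2d_pose_estimation(obs_a, obs_b, coupling=0.2):
    """Toy stand-in for joint multi-person 2D inference: each subject's
    pose estimate is its own observation, softly pulled toward the
    partner's pose, which acts as mutual context (illustrative only)."""
    ctx_a = [[(1 - coupling) * xa + coupling * xb,
              (1 - coupling) * ya + coupling * yb]
             for (xa, ya), (xb, yb) in zip(obs_a, obs_b)]
    ctx_b = [[(1 - coupling) * xb + coupling * xa,
              (1 - coupling) * yb + coupling * ya]
             for (xa, ya), (xb, yb) in zip(obs_a, obs_b)]
    return ctx_a, ctx_b

def lift_to_3d(pose_2d, depth_prior=1.0):
    """Toy 'lifting' step: append a constant prior depth to each
    2D joint, yielding (x, y, z) triples."""
    return [[x, y, depth_prior] for x, y in pose_2d]

# Hypothetical usage: two subjects, two 2D joints each.
pose_a, pose_b = joint_2d_pose_estimation([[0.0, 0.0], [1.0, 1.0]],
                                          [[2.0, 2.0], [3.0, 3.0]])
pose_a_3d = lift_to_3d(pose_a)
```

The design point the sketch mirrors is the ordering: interactions are resolved jointly in 2d first, and only the coupled 2d estimates are lifted to 3d.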