Human attributes from 3D pose tracking

  • Authors: Micha Livne, Leonid Sigal, Nikolaus F. Troje, David J. Fleet

  • Affiliations:
  • Micha Livne: Department of Computer Science, University of Toronto, 6 King's College Rd, Toronto, Ontario M5S 3H5, Canada
  • Leonid Sigal: Department of Computer Science, University of Toronto, 6 King's College Rd, Toronto, Ontario M5S 3H5, Canada, and Disney Research, 4720 Forbes Ave, Pittsburgh, PA 15213, United States
  • Nikolaus F. Troje: Department of Psychology and School of Computing, Queen's University, Kingston, Ontario K7M 3N6, Canada
  • David J. Fleet: Department of Computer Science, University of Toronto, 6 King's College Rd, Toronto, Ontario M5S 3H5, Canada

  • Venue: Computer Vision and Image Understanding
  • Year: 2012

Abstract

It is well known that biological motion conveys a wealth of socially meaningful information. From even a brief exposure, biological motion cues enable the recognition of familiar people, and the inference of attributes such as gender, age, mental state, actions and intentions. In this paper we show that from the output of a video-based 3D human tracking algorithm we can infer physical attributes (e.g., gender and weight) and aspects of mental state (e.g., happiness or sadness). In particular, with 3D articulated tracking we avoid the need for view-based models, specific camera viewpoints, and constrained domains. The task is useful for man-machine communication, and it provides a natural benchmark for evaluating the performance of 3D pose tracking methods (vs. conventional Euclidean joint error metrics). We show results on a large corpus of motion capture data and on the output of a simple 3D pose tracker applied to videos of people walking.
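
The abstract describes a two-stage pipeline, 3D pose tracking followed by attribute inference, without naming the features or predictors used. The following is a minimal, hypothetical Python sketch of the second stage only; the per-frame 3D joint positions, the Fourier gait features, and the linear SVM are illustrative assumptions, not the paper's method.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    def motion_features(poses):
        """Turn one tracked sequence, an array of shape (T, J, 3) holding
        T frames of J 3D joint positions, into a fixed-length descriptor:
        the mean pose (static body-shape cues) plus the magnitudes of a
        few low-frequency Fourier harmonics per joint coordinate (gait
        periodicity cues). Assumes T is at least ~10 frames."""
        mean_pose = poses.mean(axis=0).ravel()
        spectrum = np.abs(np.fft.rfft(poses, axis=0))
        low_freq = spectrum[1:5].ravel()  # skip DC, keep 4 harmonics
        return np.concatenate([mean_pose, low_freq])

    def evaluate(sequences, labels):
        """5-fold cross-validated accuracy of a linear SVM predicting a
        binary attribute (e.g., gender) from tracked pose sequences."""
        X = np.stack([motion_features(p) for p in sequences])
        y = np.asarray(labels)
        clf = LinearSVC(C=1.0, max_iter=10_000)  # hypothetical choice
        return cross_val_score(clf, X, y, cv=5).mean()

For a continuous attribute such as weight, the same descriptor could instead feed a regressor (e.g., ridge regression), and per-class accuracy would be replaced by a prediction-error metric.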