Learning Articulated Structure and Motion

  • Authors:
  • David A. Ross; Daniel Tarlow; Richard S. Zemel

  • Affiliations:
  • University of Toronto, Toronto, Canada M5S 3G4 (all authors)

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 2010

Abstract

Humans demonstrate a remarkable ability to parse complicated motion sequences into their constituent structures and motions. We investigate this problem, attempting to learn the structure of one or more articulated objects given a time series of two-dimensional feature positions. We model the observed sequence in terms of "stick figure" objects, under the assumption that the relative joint angles between sticks can change over time, but their lengths and connectivity are fixed. The problem is formulated as a single probabilistic model that includes multiple sub-components: associating the features with particular sticks, determining the proper number of sticks, and finding which sticks are physically joined. We test the algorithm on challenging datasets of 2D projections of optical human motion capture and feature trajectories from real videos.
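
To make the structure-recovery task concrete, the sketch below illustrates one simple way to approach two of the sub-problems the abstract mentions: grouping feature trajectories into rigid "sticks" (here via the variance of pairwise distances over time) and choosing stick connectivity (here via a minimum spanning tree over a crude joint cost). This is not the paper's probabilistic model; it is a minimal, illustrative baseline, and all function names (`rigid_affinity`, `group_features`, `joint_cost`, `connect_sticks`) and the assumption that the number of sticks is known are hypothetical choices made for the example.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.sparse.csgraph import minimum_spanning_tree


def rigid_affinity(X):
    """Dissimilarity between feature trajectories.

    X: array of shape (T, P, 2), the 2D positions of P features over T frames.
    Features on the same rigid stick keep a nearly constant mutual distance,
    so the variance of that distance over time serves as a dissimilarity.
    """
    D = np.linalg.norm(X[:, :, None, :] - X[:, None, :, :], axis=-1)  # (T, P, P)
    return D.var(axis=0)                                              # (P, P)


def group_features(X, n_sticks):
    """Cluster features into rigid groups (assumes n_sticks is given)."""
    dissim = rigid_affinity(X)
    iu = np.triu_indices(dissim.shape[0], k=1)
    Z = linkage(dissim[iu], method="average")
    return fcluster(Z, t=n_sticks, criterion="maxclust")  # labels in 1..n_sticks


def joint_cost(X, labels, a, b):
    """Crude cost of joining sticks a and b: variance of the gap between the
    two group centroids over time (a stand-in for a proper joint estimate)."""
    ca = X[:, labels == a, :].mean(axis=1)
    cb = X[:, labels == b, :].mean(axis=1)
    return np.linalg.norm(ca - cb, axis=1).var()


def connect_sticks(X, labels):
    """Pick stick connectivity as a minimum spanning tree over joint costs."""
    ids = np.unique(labels)
    C = np.zeros((len(ids), len(ids)))
    for i, a in enumerate(ids):
        for j, b in enumerate(ids):
            if i < j:
                C[i, j] = joint_cost(X, labels, a, b)
    tree = minimum_spanning_tree(C).toarray()
    return [(int(ids[i]), int(ids[j])) for i, j in zip(*np.nonzero(tree))]


# Example usage on synthetic trajectories of shape (frames, points, 2):
# labels = group_features(X, n_sticks=3)
# edges = connect_sticks(X, labels)
```

In contrast to this hard-clustering sketch, the paper treats feature-to-stick assignment, the number of sticks, and stick connectivity jointly within a single probabilistic model rather than as separate greedy steps.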