Monocular tracking with a mixture of view-dependent learned models

  • Authors:
  • Tobias Jaeggli; Esther Koller-Meier; Luc Van Gool

  • Affiliations:
  • D-ITET/BIWI, ETH Zurich, Zurich (all authors)

  • Venue:
  • AMDO'06: Proceedings of the 4th International Conference on Articulated Motion and Deformable Objects
  • Year:
  • 2006


Abstract

This paper considers the problem of monocular human body tracking using learned models. We propose to learn the joint probability distribution of appearance and body pose using a mixture of view-dependent models. In this way, the multimodal and nonlinear relationships between appearance and pose can be captured reliably. We formulate inference algorithms that are based on generative models while exploiting the advantages of a learned model over the traditionally used geometric body models. Given static images or sequences, body poses and bounding box locations are inferred from silhouette-based image descriptors, taking into account prior information about likely body poses and a motion model. We consider analytical computations and Monte Carlo techniques, as well as a combination of both: in a Rao-Blackwellised particle filter, the tracking problem is partitioned into a part that is solved analytically and a part that is solved with particle filtering. Tracking results are reported for human locomotion.
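The Rao-Blackwellised partitioning mentioned in the abstract can be illustrated with a generic sketch: a discrete, nonlinear part of the state (here a toy "view" mode) is handled by sampling particles, while the remaining conditionally linear-Gaussian part (here a scalar pose variable) is marginalised analytically with a per-particle Kalman filter. All model parameters, variable names, and the scalar state are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy switching linear-Gaussian model (illustrative, not the paper's model):
# a discrete mode m in {0, 1} selects the dynamics of a scalar state x.
A = np.array([1.0, 0.8])   # per-mode state transition coefficient
Q = np.array([0.1, 0.1])   # per-mode process noise variance
H, R = 1.0, 0.2            # observation model y = H*x + noise, noise variance R
P_SWITCH = 0.1             # probability that the mode flips per step

def rbpf_step(modes, means, covs, weights, y):
    """One Rao-Blackwellised particle filter step for observation y."""
    n = len(modes)
    # 1. Sample the discrete mode (the "particle" part of the state).
    flip = rng.random(n) < P_SWITCH
    modes = np.where(flip, 1 - modes, modes)
    # 2. Kalman predict for the linear sub-state, conditioned on the mode.
    means = A[modes] * means
    covs = A[modes] ** 2 * covs + Q[modes]
    # 3. Kalman update; the innovation likelihood reweights the particles.
    S = H ** 2 * covs + R              # innovation variance
    K = covs * H / S                   # Kalman gain
    resid = y - H * means
    means = means + K * resid
    covs = (1.0 - K * H) * covs
    lik = np.exp(-0.5 * resid ** 2 / S) / np.sqrt(2.0 * np.pi * S)
    weights = weights * lik
    weights /= weights.sum()
    # 4. Resample when the effective sample size degenerates.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        modes, means, covs = modes[idx], means[idx], covs[idx]
        weights = np.full(n, 1.0 / n)
    return modes, means, covs, weights

# Run the filter on a short synthetic observation sequence near 0.5.
n = 200
modes = rng.integers(0, 2, n)
means = np.zeros(n)
covs = np.ones(n)
weights = np.full(n, 1.0 / n)
for y in [0.5, 0.4, 0.6, 0.55, 0.45]:
    modes, means, covs, weights = rbpf_step(modes, means, covs, weights, y)
estimate = float(np.sum(weights * means))  # posterior mean of the pose state
```

The appeal of this partitioning, as in the paper, is variance reduction: particles are only spent on the low-dimensional nonlinear part, while the conditionally Gaussian part is integrated out in closed form.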