Tracking human pose with multiple activity models

  • Authors:
  • John Darby; Baihua Li; Nicholas Costen

  • Affiliations:
  • Department of Computing and Mathematics, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester, M1 5GD, UK (all authors)

  • Venue:
  • Pattern Recognition
  • Year:
  • 2010


Abstract

Tracking unknown human motions using generative tracking techniques requires the exploration of a high-dimensional pose space, which is both difficult and computationally expensive. Alternatively, if the type of activity is known and training data are available, a low-dimensional latent pose space may be learned, reducing the difficulty and cost of the estimation task. In this paper we attempt to combine the competing benefits of these two generative tracking scenarios, flexibility and efficiency, within a single approach. We define a number of "activity models", each composed of a pose space with unique dimensionality and an associated dynamical model, and each designed for use in the recovery of a particular class of activity. We then propose a method for the fair combination of these activity models for use in particle dispersion by an annealed particle filter. The resulting algorithm, which we term multiple activity model annealed particle filtering (MAM-APF), is able to dynamically vary the scope of its search effort, using a small number of particles to explore latent pose spaces and a large number of particles to explore the full pose space. We present quantitative results on the HumanEva-I and HumanEva-II datasets, demonstrating robust 3D tracking of known and unknown activities from fewer than four cameras.
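The core idea described in the abstract can be illustrated with a minimal sketch: several activity models of different latent dimensionality share a common particle budget, with few particles assigned to cheap low-dimensional latent spaces and many to the expensive full pose space, and each annealing layer reweights all particles under a tempered likelihood. All names below (`ActivityModel`, `anneal_step`, the toy likelihood) are illustrative assumptions, not the authors' implementation.

```python
import math
import random

class ActivityModel:
    """One activity model: a pose space of a given dimensionality
    plus a particle budget for exploring it (illustrative only)."""
    def __init__(self, name, dim, n_particles):
        self.name = name
        self.dim = dim                  # dimensionality of this pose space
        self.n_particles = n_particles  # share of the overall particle budget

    def sample_particle(self):
        # Draw a pose hypothesis in this model's space; a real tracker
        # would sample from a learned dynamical model instead.
        return [random.gauss(0.0, 1.0) for _ in range(self.dim)]

def likelihood(pose):
    # Stand-in image likelihood: peaks when the pose is near the origin.
    return math.exp(-sum(x * x for x in pose))

def anneal_step(models, beta):
    """One annealing layer: score every particle from every model under
    the tempered likelihood w ∝ p(z|x)^beta, then normalise the weights
    jointly so the models compete fairly for the next layer's budget."""
    weighted = []
    for m in models:
        for _ in range(m.n_particles):
            pose = m.sample_particle()
            weighted.append((m.name, pose, likelihood(pose) ** beta))
    total = sum(w for _, _, w in weighted)
    return [(name, pose, w / total) for name, pose, w in weighted]

# A cheap low-dimensional latent model vs. an expensive full-pose model.
models = [
    ActivityModel("walk (latent)", dim=3, n_particles=20),
    ActivityModel("unknown (full pose)", dim=30, n_particles=200),
]
particles = anneal_step(models, beta=0.5)  # small beta = broad early layer
best = max(particles, key=lambda p: p[2])
```

In a full annealed particle filter, several such layers would be run per frame with increasing `beta`, resampling between layers; here a single layer is shown only to make the shared-budget, joint-normalisation idea concrete.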