View and style-independent action manifolds for human activity recognition

  • Authors:
  • Michał Lewandowski; Dimitrios Makris; Jean-Christophe Nebel

  • Affiliation:
  • Digital Imaging Research Centre, Kingston University, London, United Kingdom

  • Venue:
  • ECCV'10: Proceedings of the 11th European Conference on Computer Vision, Part VI
  • Year:
  • 2010

Abstract

We introduce a novel approach to automatically learn intuitive and compact descriptors of human body motions for activity recognition. Each action descriptor is produced, first, by applying Temporal Laplacian Eigenmaps to view-dependent videos in order to produce a style-invariant embedded manifold for each view separately. Then, all view-dependent manifolds are automatically combined to discover a unified representation, which models an action in a single three-dimensional space, independently of style and viewpoint. In addition, a bidirectional nonlinear mapping function is incorporated to allow actions to be projected between the original and embedded spaces. The proposed framework is evaluated on a real and challenging dataset (IXMAS), which is composed of a variety of actions seen from arbitrary viewpoints. Experimental results demonstrate robustness against style and view variation and match the most accurate action recognition method.
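To make the per-view embedding and manifold-combination steps of the abstract more concrete, the following is a minimal, hedged sketch. It is not the authors' implementation: it substitutes ordinary Laplacian Eigenmaps (scikit-learn's SpectralEmbedding) for the paper's Temporal Laplacian Eigenmaps, uses randomly generated frame descriptors in place of real IXMAS silhouettes, and stands in a simple Procrustes alignment for the paper's automatic unification of view-dependent manifolds.

```python
# Sketch only: standard Laplacian Eigenmaps as a stand-in for
# Temporal Laplacian Eigenmaps, with synthetic data.
import numpy as np
from scipy.spatial import procrustes
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)

# Hypothetical input: for each camera view, a sequence of frame descriptors
# (e.g. flattened silhouettes), shape (n_frames, n_features).
views = {f"cam{v}": rng.normal(size=(200, 256)) for v in range(5)}

# Step 1: embed each view-dependent sequence into a 3-D manifold.
embeddings = {}
for name, frames in views.items():
    emb = SpectralEmbedding(n_components=3, affinity="nearest_neighbors",
                            n_neighbors=10, random_state=0)
    embeddings[name] = emb.fit_transform(frames)  # (n_frames, 3)

# Step 2 (placeholder): align the per-view manifolds into one unified 3-D
# action manifold. The paper discovers this unified representation
# automatically; here Procrustes alignment to an arbitrary reference
# view merely illustrates the idea.
reference = embeddings["cam0"]
unified = {name: procrustes(reference, y)[1] for name, y in embeddings.items()}
```

A recognizer would then compare a query sequence, embedded the same way, against the unified per-action manifolds; the bidirectional mapping mentioned in the abstract would additionally allow reconstructing poses in the original space from points on the manifold.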