View invariant activity recognition with manifold learning

  • Authors:
  • Sherif Azary and Andreas Savakis

  • Affiliations:
  • Computing and Information Sciences and Computer Engineering, Rochester Institute of Technology, Rochester, NY (both authors)

  • Venue:
  • ISVC'10: Proceedings of the 6th International Conference on Advances in Visual Computing - Volume Part II
  • Year:
  • 2010

Abstract

Activity recognition in complex scenes can be very challenging because human actions are unconstrained and may be observed from multiple views. While progress has been made in recognizing activities from fixed views, more research is needed on view invariant recognition methods. Furthermore, recognizing and classifying activities requires processing data in both the spatial and temporal domains, which produces large amounts of data and can be computationally expensive. To address view invariance and high dimensional data, we propose the use of manifold learning with Locality Preserving Projections (LPP). We develop an efficient set of features based on radial distance and present a manifold learning framework for learning low dimensional representations of action primitives that can be used to recognize activities from multiple views. Using this approach, we achieve high recognition rates on the INRIA IXMAS dataset.
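
The sketch below illustrates, under stated assumptions, the two ingredients named in the abstract: a radial-distance descriptor computed from a binary silhouette and an LPP projection to a low dimensional space. It is not the authors' implementation; the function names, the angular-bin encoding of radial distance, and parameters such as num_bins, n_neighbors, sigma, and n_components are illustrative choices only.

```python
# Minimal sketch (assumptions, not the paper's code): radial-distance features
# from a binary silhouette, plus Locality Preserving Projections (LPP).
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph


def radial_distance_features(silhouette, num_bins=36):
    """Distance from the silhouette centroid to its outermost boundary point,
    sampled over num_bins angular bins (one common radial-distance encoding;
    assumed here, not taken from the paper)."""
    ys, xs = np.nonzero(silhouette)
    cy, cx = ys.mean(), xs.mean()
    angles = np.arctan2(ys - cy, xs - cx)
    radii = np.hypot(ys - cy, xs - cx)
    bins = ((angles + np.pi) / (2 * np.pi) * num_bins).astype(int) % num_bins
    feat = np.zeros(num_bins)
    for b in range(num_bins):
        if np.any(bins == b):
            feat[b] = radii[bins == b].max()   # outermost point in this bin
    return feat / (feat.max() + 1e-8)          # normalize for scale invariance


def lpp(X, n_components=3, n_neighbors=5, sigma=1.0):
    """Standard LPP: build a kNN graph with heat-kernel weights, then solve the
    generalized eigenproblem  X^T L X a = lambda X^T D X a  and keep the
    eigenvectors with the smallest eigenvalues as the projection."""
    W = kneighbors_graph(X, n_neighbors, mode='distance', include_self=False)
    W = W.maximum(W.T).toarray()               # symmetrize the neighbor graph
    mask = W > 0
    W[mask] = np.exp(-W[mask] ** 2 / (2 * sigma ** 2))
    D = np.diag(W.sum(axis=1))
    L = D - W                                  # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # small ridge for stability
    _, vecs = eigh(A, B)                       # eigenvalues in ascending order
    P = vecs[:, :n_components]                 # projection matrix
    return X @ P, P
```

In a pipeline of this kind, each frame's silhouette would be converted to a radial-distance vector, the vectors stacked into a matrix X, and the LPP embedding used for classification (for example, nearest-neighbor matching of action primitives); the specific classifier is left open here, as the abstract does not state it.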