Spatial-Temporal correlatons for unsupervised action classification

  • Authors:
  • Silvio Savarese; Andrey DelPozo; Juan Carlos Niebles; Li Fei-Fei

  • Affiliations:
  • Beckman Institute, University of Illinois at Urbana-Champaign, USA; Dept. of Computer Science, University of Illinois at Urbana-Champaign, USA; Dept. of Computer Science, Princeton University, USA / Robotics and Intelligent Systems Group, Universidad del Norte, Colombia; Dept. of Computer Science, Princeton University, USA

  • Venue:
  • WMVC '08: Proceedings of the 2008 IEEE Workshop on Motion and Video Computing
  • Year:
  • 2008

Abstract

Spatial-temporal local motion features have shown promising results in complex human action classification. Most previous works [6],[16],[21] treat these spatial-temporal features as a bag of video words, omitting any long-range, global information in either the spatial or the temporal domain. Other approaches to learning the temporal signature of motion tend to impose a fixed trajectory on the features or on body parts returned by tracking algorithms, which leaves little flexibility for the algorithm to learn the optimal temporal pattern describing these motions. In this paper, we propose using spatial-temporal correlograms to encode flexible long-range temporal information into the spatial-temporal motion features. This results in a much richer description of human actions. We then apply an unsupervised generative model to learn different classes of human actions from these ST-correlograms. The KTH dataset, one of the most challenging and popular human action datasets, is used for experimental evaluation. Our algorithm achieves the highest classification accuracy reported for this dataset under an unsupervised learning scheme.
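
The abstract does not spell out how a spatial-temporal correlogram is built, so the following is only a minimal sketch of the general idea: counting co-occurrences of quantized video words as a function of their temporal offset. The function name `temporal_correlogram`, the choice of offset bins in `time_bins`, and the per-bin normalization are illustrative assumptions, not the authors' implementation, which also handles the spatial dimension and further quantizes the resulting descriptors.

```python
import numpy as np

def temporal_correlogram(word_labels, timestamps, n_words, time_bins):
    """Co-occurrence counts of video-word pairs, binned by temporal offset.

    word_labels : (N,) codebook indices of the quantized spatial-temporal features
    timestamps  : (N,) frame index of each feature
    n_words     : codebook size
    time_bins   : offset thresholds defining the temporal bins, e.g. [0, 5, 15, 40]
    """
    n_bins = len(time_bins) - 1
    corr = np.zeros((n_bins, n_words, n_words))
    for i in range(len(word_labels)):
        dt = np.abs(timestamps - timestamps[i])  # temporal offset to every other feature
        for b in range(n_bins):
            mask = (dt >= time_bins[b]) & (dt < time_bins[b + 1])
            mask[i] = False  # exclude the feature itself
            for j in np.nonzero(mask)[0]:
                corr[b, word_labels[i], word_labels[j]] += 1
    # Normalize each offset bin so the descriptor does not depend on clip length.
    corr = corr.reshape(n_bins, -1)
    corr = corr / np.maximum(corr.sum(axis=1, keepdims=True), 1)
    return corr.ravel()
```

Relative to a plain bag of video words, a descriptor of this kind keeps the word histogram information (the diagonal of the zero-offset bin) while adding long-range temporal structure, which is what the paper exploits for unsupervised learning of action classes.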