Fusing appearance and distribution information of interest points for action recognition

  • Authors:
  • Matteo Bregonzio; Tao Xiang; Shaogang Gong

  • Affiliations:
  • School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, UK (all authors)

  • Venue:
  • Pattern Recognition
  • Year:
  • 2012

Abstract

Most existing action recognition methods represent actions as bags of space-time interest points: interest points are detected in the video and described using appearance-based descriptors, each descriptor is classified as a video-word, and a histogram of these video-words is used for recognition. Such methods therefore rely solely on the discriminative power of individual local space-time descriptors, ignoring the potentially useful information in the global spatio-temporal distribution of the interest points. In this paper we propose a novel action representation that differs significantly from existing interest-point-based representations in that only the global distribution information of the interest points is exploited. Specifically, holistic features are extracted from clouds of interest points accumulated over multiple temporal scales. Since the proposed spatio-temporal distribution representation carries different but complementary information to the conventional Bag-of-Words representation, we formulate a feature fusion method based on Multiple Kernel Learning. Experiments on the KTH and Weizmann datasets demonstrate that our approach outperforms most existing methods, particularly under occlusion and under changes in view angle, clothing, and carrying condition.
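To make the kernel-level fusion idea in the abstract concrete, the sketch below combines a Bag-of-Words kernel with an interest-point-distribution kernel for SVM classification. Note this is a minimal illustration, not the authors' implementation: full Multiple Kernel Learning optimises the kernel weights jointly with the classifier, whereas here a grid-searched convex combination stands in for it, and the feature arrays and names (X_bow, X_dist, rbf_kernel) are hypothetical stand-ins for real video-word histograms and cloud-distribution features.

```python
# Minimal sketch of fusing two action representations at the kernel level,
# assuming scikit-learn. A convex combination of per-representation kernels
# approximates what MKL would learn; all data here is synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def rbf_kernel(X, Y, gamma):
    """Gram matrix of an RBF kernel between two sets of row-vector features."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

# Hypothetical features: X_bow stands in for video-word histograms,
# X_dist for holistic interest-point-cloud distribution features.
rng = np.random.default_rng(0)
X_bow = rng.random((60, 100))
X_dist = rng.random((60, 14))
y = rng.integers(0, 6, 60)  # six action classes, as in KTH

K_bow = rbf_kernel(X_bow, X_bow, gamma=1.0 / X_bow.shape[1])
K_dist = rbf_kernel(X_dist, X_dist, gamma=1.0 / X_dist.shape[1])

best_beta, best_score = 0.0, -np.inf
for beta in np.linspace(0.0, 1.0, 11):
    # Convex combination of kernels; a PSD kernel since both inputs are PSD.
    K = beta * K_bow + (1.0 - beta) * K_dist
    score = cross_val_score(SVC(kernel="precomputed"), K, y, cv=3).mean()
    if score > best_score:
        best_beta, best_score = beta, score

print(f"best kernel weight beta={best_beta:.1f}, cv accuracy={best_score:.2f}")
```

With real features, the learned weight indicates how much each representation contributes; the paper's point is that the distribution kernel adds information the appearance-based Bag-of-Words kernel lacks, so the fused kernel outperforms either alone.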