Action recognition using linear dynamic systems

  • Authors:
  • Haoran Wang, Chunfeng Yuan, Guan Luo, Weiming Hu, Changyin Sun

  • Affiliations:
  • School of Automation, Southeast University, Nanjing, China (Haoran Wang, Changyin Sun)
  • National Laboratory of Pattern Recognition, Institute of Automation, CAS, Beijing, China (Haoran Wang, Chunfeng Yuan, Guan Luo, Weiming Hu)

  • Venue:
  • Pattern Recognition
  • Year:
  • 2013

Abstract

In this paper, we propose a novel approach to action recognition based on Linear Dynamic Systems (LDSs). Our main contributions are two-fold. First, we introduce LDSs to action recognition. LDSs describe dynamic textures, which exhibit certain stationarity properties over time. We use them to model spatiotemporal patches extracted from the video sequence, because a patch is more analogous to a linear time-invariant system than the whole sequence is. Notably, LDSs do not live in a Euclidean space, so we adopt the kernel principal angle to measure the similarity between LDSs, and then apply multiclass spectral clustering to generate the codebook for the bag-of-features representation. Second, we propose a supervised codebook-pruning method that preserves the discriminative visual words and suppresses noise in each action class: the visual words that maximize the inter-class distance and minimize the intra-class distance are selected for classification. Our approach yields state-of-the-art performance on three benchmark datasets. In particular, experiments on the challenging UCF Sports and Feature Films datasets demonstrate the effectiveness of the proposed approach in realistic, complex scenarios.
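The pipeline summarized above, fitting an LDS to each spatiotemporal patch and comparing systems via principal angles between their subspaces, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses the classic SVD-based suboptimal LDS identification for dynamic textures and plain (non-kernelized) principal angles between finite-horizon observability subspaces; the function names, `n_states`, and `horizon` are illustrative assumptions.

```python
import numpy as np

def fit_lds(Y, n_states=5):
    """Fit an LDS  x_{t+1} = A x_t,  y_t = C x_t  to a patch sequence.

    Y is a (d x T) matrix whose columns are vectorized patch frames.
    Uses the SVD-based suboptimal identification common in the
    dynamic-texture literature (a sketch, not the paper's exact method).
    """
    Y = Y - Y.mean(axis=1, keepdims=True)          # remove temporal mean
    U, S, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n_states]                            # observation matrix
    X = np.diag(S[:n_states]) @ Vt[:n_states, :]   # estimated state trajectory
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])       # least-squares transition
    return A, C

def lds_similarity(A1, C1, A2, C2, horizon=10):
    """Cosine of the smallest principal angle between the finite-horizon
    observability subspaces of two LDSs -- a simple Euclidean stand-in
    for the kernel principal-angle similarity used in the paper."""
    def obs_basis(A, C):
        blocks = [C]
        for _ in range(horizon - 1):
            blocks.append(blocks[-1] @ A)          # C, CA, CA^2, ...
        Q, _ = np.linalg.qr(np.vstack(blocks))     # orthonormal basis
        return Q
    s = np.linalg.svd(obs_basis(A1, C1).T @ obs_basis(A2, C2),
                      compute_uv=False)
    return s[0]                                    # 1.0 for identical subspaces
```

Pairwise similarities computed this way would feed the multiclass spectral clustering that builds the codebook; the patch-to-system step is what lets the stationarity assumption hold locally even when it fails for the full video.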