Auto learning temporal atomic actions for activity classification

  • Authors:
  • Jiangen Zhang; Benjamin Yao; Yongtian Wang

  • Affiliations:
  • Key Laboratory of Photoelectronic Imaging Technology and System, Ministry of Education, Beijing Institute of Technology, Beijing 100081, China; Department of Statistics, University of California, Los Angeles, Los Angeles, CA 90095, United States; Key Laboratory of Photoelectronic Imaging Technology and System, Ministry of Education, Beijing Institute of Technology, Beijing 100081, China

  • Venue:
  • Pattern Recognition
  • Year:
  • 2013

Abstract

In this paper, we present a model for learning atomic actions for complex activity classification. A video sequence is first represented by a collection of visual interest points. The model then automatically clusters visual words into atomic actions (topics) based on their co-occurrence and temporal proximity within the same activity category, using an extension of the hierarchical Dirichlet process (HDP) mixture model. Because the HDP is a generative model, our approach is robust to noisy interest points arising from varying conditions. Finally, we use both a naive Bayes classifier and a linear SVM for activity classification. We first use intermediate results on a synthetic example to demonstrate the advantages of our model, and then apply it to the challenging 16-class Olympic Sports dataset, where it outperforms other state-of-the-art methods.
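To make the final classification stage concrete, here is a minimal sketch of one of the two classifiers the abstract mentions: a multinomial naive Bayes classifier over per-video atomic-action (topic) histograms. This is not the paper's implementation; the topic counts, class names, and toy data below are hypothetical, and the HDP inference step that would produce the histograms is assumed to have already run.

```python
import math
from collections import defaultdict

def train_nb(histograms, labels, n_topics, alpha=1.0):
    """Estimate log P(class) and log P(topic | class) with Laplace smoothing.

    histograms: list of per-video topic-count vectors (length n_topics),
    assumed to come from an upstream topic model such as an HDP.
    """
    classes = sorted(set(labels))
    topic_counts = {c: [alpha] * n_topics for c in classes}
    class_counts = defaultdict(int)
    for hist, y in zip(histograms, labels):
        class_counts[y] += 1
        for t, n in enumerate(hist):
            topic_counts[y][t] += n
    model = {}
    for c in classes:
        total = sum(topic_counts[c])
        model[c] = (
            math.log(class_counts[c] / len(labels)),          # log prior
            [math.log(n / total) for n in topic_counts[c]],   # log P(t | c)
        )
    return model

def predict_nb(model, hist):
    """Return argmax over classes of log P(c) + sum_t hist[t] * log P(t | c)."""
    best, best_score = None, float("-inf")
    for c, (log_prior, log_theta) in model.items():
        score = log_prior + sum(n * lt for n, lt in zip(hist, log_theta))
        if score > best_score:
            best, best_score = c, score
    return best

# Hypothetical toy data: 3 topics, two activity classes whose videos
# concentrate on different atomic actions.
train_x = [[8, 1, 1], [7, 2, 1], [1, 8, 1], [2, 7, 1]]
train_y = ["vault", "vault", "dive", "dive"]
model = train_nb(train_x, train_y, n_topics=3)
print(predict_nb(model, [9, 0, 1]))  # topic-0-heavy video -> "vault"
```

In the paper's pipeline, the histogram for each video would be the posterior topic assignments inferred by the temporal HDP rather than these hand-made counts, and the linear SVM would be trained on the same topic-histogram features.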