Modeling temporal structure of decomposable motion segments for activity classification

  • Authors and affiliations:
  • Juan Carlos Niebles (Stanford University, Stanford, CA; Princeton University, Princeton, NJ; Universidad del Norte, Barranquilla, Colombia)
  • Chih-Wei Chen (Stanford University, Stanford, CA)
  • Li Fei-Fei (Stanford University, Stanford, CA)

  • Venue:
  • ECCV'10 Proceedings of the 11th European Conference on Computer Vision: Part II
  • Year:
  • 2010

Abstract

Much recent research in human activity recognition has focused on recognizing simple repetitive actions (walking, running, waving) and punctual actions (sitting up, opening a door, hugging). However, many interesting human activities are characterized by a complex temporal composition of simple actions, and automatic recognition of such complex actions can benefit from a good understanding of their temporal structure. In this paper, we present a framework for modeling motion that exploits the temporal structure of human activities. In our framework, we represent activities as temporal compositions of motion segments. We train a discriminative model that encodes a temporal decomposition of video sequences, together with an appearance model for each motion segment. During recognition, a query video is matched to the model according to the learned appearances and motion segment decomposition, and classification is based on the quality of the match between the motion segment classifiers and the temporal segments of the query sequence. To validate our approach, we introduce a new dataset of complex Olympic Sports activities. We show that our algorithm outperforms other state-of-the-art methods.
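
The abstract describes the matching step only at a high level. The following minimal Python sketch illustrates one plausible reading of it: each motion segment contributes its best appearance response over time, discounted by how far its chosen placement drifts from a learned temporal anchor. The quadratic displacement penalty and all names here (score_video, anchors, penalty) are illustrative assumptions, not the paper's actual formulation or API.

```python
import numpy as np

def score_video(features, segment_classifiers, anchors, penalty=1.0):
    """Score a query video against one activity model (illustrative sketch).

    features: (T, D) array, one appearance descriptor per time step.
    segment_classifiers: list of (D,) linear classifier weight vectors.
    anchors: preferred normalized temporal positions in [0, 1], one per segment.
    penalty: weight of the (assumed) quadratic temporal displacement cost.
    """
    T = features.shape[0]
    positions = np.arange(T) / max(T - 1, 1)  # normalized time axis in [0, 1]
    total = 0.0
    for w, anchor in zip(segment_classifiers, anchors):
        responses = features @ w                           # appearance score per time step
        deformation = penalty * (positions - anchor) ** 2  # cost of drifting from the anchor
        total += float(np.max(responses - deformation))    # best placement for this segment
    return total

# Classification: pick the activity model with the highest matching score.
rng = np.random.default_rng(0)
query = rng.standard_normal((50, 16))  # 50 time steps, 16-dim descriptors (toy data)
models = {
    "high_jump": ([rng.standard_normal(16) for _ in range(3)], [0.1, 0.5, 0.9]),
    "long_jump": ([rng.standard_normal(16) for _ in range(3)], [0.2, 0.6, 0.8]),
}
label = max(models, key=lambda name: score_video(query, *models[name]))
print(label)
```

In this reading, classifying a query amounts to scoring it against every learned activity model and selecting the highest-scoring class, with the per-segment max implementing the "quality of the match" between segment classifiers and temporal segments.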