Recovering the Basic Structure of Human Activities from a Video-Based Symbol String

  • Authors: Kris M. Kitani; Yoichi Sato; Akihiro Sugimoto
  • Affiliations: University of Tokyo; University of Tokyo; National Institute of Informatics, Japan
  • Venue: WMVC '07: Proceedings of the IEEE Workshop on Motion and Video Computing
  • Year: 2007

Abstract

In recent years, stochastic context-free grammars have been shown to be effective for modeling human activities because of the hierarchical structures they can represent. However, most research in this area has yet to address the problem of learning activity grammars from a noisy input source, namely video. In this paper, we present a framework for identifying noise and recovering the basic activity grammar from a noisy symbol string produced from video. We identify the noise symbols by finding the set of non-noise symbols that optimally compresses the training data, where the optimality of compression is measured with a minimum description length (MDL) criterion. We demonstrate the robustness of our system to noise and its effectiveness in learning the basic structure of human activity through an experiment with real video from a local convenience store.
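The core idea of the abstract — pick the subset of symbols whose retention best compresses the training string under an MDL criterion — can be illustrated with a toy sketch. This is not the paper's actual method (the paper compresses via an induced activity grammar); here, as a stand-in, kept symbols are coded with a simple bigram model of the filtered string, discarded symbols are paid for at the uniform rate, and all function names and the scoring scheme are illustrative assumptions.

```python
import math
from itertools import combinations

def description_length(string, keep):
    """Toy two-part MDL score for a candidate set of non-noise symbols.

    Kept symbols are encoded with a first-order (bigram) model of the
    filtered string; discarded symbols are treated as noise and paid for
    at the full uniform rate log2(|alphabet|) bits each.
    (Illustrative only: the paper's criterion uses grammar compression.)
    """
    alphabet = sorted(set(string))
    sym_bits = math.log2(len(alphabet))
    filtered = [s for s in string if s in keep]
    noise_count = len(string) - len(filtered)

    # Model cost: identify which symbols are kept.
    model_bits = len(keep) * sym_bits
    # The first filtered symbol is coded at the uniform rate.
    data_bits = sym_bits if filtered else 0.0
    # Count bigram transitions in the filtered string.
    trans, out_totals = {}, {}
    for prev, cur in zip(filtered, filtered[1:]):
        trans[(prev, cur)] = trans.get((prev, cur), 0) + 1
        out_totals[prev] = out_totals.get(prev, 0) + 1
    # Each transition costs -log2 P(cur | prev) under ML estimates.
    for (prev, _), count in trans.items():
        data_bits += count * -math.log2(count / out_totals[prev])
    # Each noise occurrence costs the uniform rate.
    data_bits += noise_count * sym_bits
    return model_bits + data_bits

def best_non_noise_set(string):
    """Exhaustive search over symbol subsets for the minimum-DL set."""
    alphabet = sorted(set(string))
    best_keep, best_dl = None, math.inf
    for r in range(1, len(alphabet) + 1):
        for keep in combinations(alphabet, r):
            dl = description_length(string, set(keep))
            if dl < best_dl:
                best_keep, best_dl = set(keep), dl
    return best_keep, best_dl

# A regular a/b alternation interrupted by sporadic symbols x and y:
# the repetitive structure compresses well, so {a, b} is recovered as
# the non-noise set and x, y are flagged as noise.
symbols = "ab" * 10 + "x" + "ab" * 4 + "y" + "ab" * 6
print(best_non_noise_set(symbols))  # → ({'a', 'b'}, 10.0)
```

The search is exponential in the alphabet size, which is fine for a toy; the paper's setting presumably requires a more directed search than brute-force subset enumeration.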