Activity Representation in Crowd

  • Authors:
  • Yunqian Ma; Petr Cisar

  • Affiliations:
  • Honeywell Labs, Golden Valley, MN 55422, USA; Honeywell Prague Laboratory, Prague 14800, Czech Republic

  • Venue:
  • SSPR & SPR '08 Proceedings of the 2008 Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition
  • Year:
  • 2008

Abstract

Video surveillance of large facilities, such as airports, rail stations, and casinos, is developing rapidly. Cameras installed at such locations often overlook large crowds, which makes tasks such as activity and scene understanding very challenging. Traditional activity recognition methods, which rely on input from low-level processing modules for background subtraction and tracking, are unable to cope with the frequent occlusions in such scenes. In this paper, we propose a novel activity representation and recognition method that bypasses these commonly used low-level modules. We model each local spatio-temporal patch as a dynamic texture. Using the Martin distance metric to compare two patches based on their estimated dynamic texture parameters, we present a method to temporally stitch together local regions to form activity streamlines, representing each streamline by its constituent dynamic textures. This allows us to perform activity recognition without explicitly detecting individuals in the scene. We demonstrate our method on multiple real data sets and show promising results.
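The comparison step at the heart of the abstract — the Martin distance between two dynamic textures — can be sketched as follows. A dynamic texture is a linear dynamical system with state matrix A and observation matrix C; the Martin distance is computed from the principal (subspace) angles between the column spaces of the two systems' extended observability matrices. This is a minimal illustrative sketch, not the authors' implementation: the function names, the truncation depth `n_blocks`, and the assumption that (A, C) have already been estimated from the patches are all mine.

```python
import numpy as np

def extended_observability(A, C, n_blocks=10):
    """Stack [C; CA; CA^2; ...] up to n_blocks terms (finite truncation)."""
    blocks, M = [], C.copy()
    for _ in range(n_blocks):
        blocks.append(M)
        M = M @ A
    return np.vstack(blocks)

def martin_distance(A1, C1, A2, C2, n_blocks=10):
    """Squared Martin distance: -2 * sum(log cos(theta_i)),
    where theta_i are the subspace angles between the extended
    observability subspaces of the two systems."""
    O1 = extended_observability(A1, C1, n_blocks)
    O2 = extended_observability(A2, C2, n_blocks)
    # Orthonormal bases for the two observability subspaces.
    Q1, _ = np.linalg.qr(O1)
    Q2, _ = np.linalg.qr(O2)
    # Singular values of Q1^T Q2 are the cosines of the subspace angles.
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    s = np.clip(s, 1e-12, 1.0)  # guard against log(0) and rounding above 1
    return float(-2.0 * np.sum(np.log(s)))
```

Under this sketch, two patches with identical estimated parameters give a distance of (numerically) zero, and the distance grows as their dynamics diverge — which is what makes it usable as the stitching criterion for forming streamlines.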