Dynamic scene understanding by improved sparse topical coding

  • Authors:
  • Wei Fu; Jinqiao Wang; Hanqing Lu; Songde Ma

  • Affiliations:
  • National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China (all authors)

  • Venue:
  • Pattern Recognition
  • Year:
  • 2013


Abstract

The explosive growth of cameras in public areas demands fully automated surveillance and monitoring systems. In this paper, we propose a novel unsupervised approach that automatically explores the motion patterns occurring in dynamic scenes under an improved sparse topical coding (STC) framework. An input video is first segmented into a sequence of non-overlapping clips. Optical flow features are extracted from each pair of consecutive frames and quantized into discrete visual flow words. Each video clip is then interpreted as a document, and its visual flow words as the words within that document. The improved STC is applied to discover latent patterns that represent the common motion distributions of the scene. Finally, each video clip is represented as a weighted sum of these patterns with only a few non-zero coefficients. The proposed approach is purely data-driven and scene-independent, which makes it suitable for a wide range of application scenarios, such as rule mining and abnormal event detection. Experimental results and comparisons on various public datasets demonstrate the promise of the proposed approach.
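The pipeline in the abstract (quantize per-frame optical flow into visual words, bag each clip into a document, then find a sparse combination of motion patterns) can be sketched as below. This is a minimal illustration under stated assumptions: the direction/magnitude codebook is a plausible stand-in (the paper does not give its exact quantization), and the coding step is plain l1-penalized (lasso) coding solved with ISTA rather than the paper's improved STC inference.

```python
import numpy as np

def quantize_flow(flow, n_dir=8, mag_edges=(1.0, 4.0)):
    """Quantize per-pixel optical flow (H, W, 2) into discrete visual-word ids.

    Words are joint direction/magnitude bins (here 8 directions x 3 magnitude
    ranges = 24 words). Hypothetical codebook layout, chosen for illustration.
    """
    dx, dy = flow[..., 0], flow[..., 1]
    ang = np.mod(np.arctan2(dy, dx), 2 * np.pi)
    dir_bin = np.minimum((ang / (2 * np.pi) * n_dir).astype(int), n_dir - 1)
    mag_bin = np.digitize(np.hypot(dx, dy), mag_edges)
    n_mag = len(mag_edges) + 1
    return dir_bin * n_mag + mag_bin          # ids in [0, n_dir * n_mag)

def clip_document(flows, vocab_size=24):
    """Bag-of-words histogram for one clip (a list of per-frame flow fields)."""
    words = np.concatenate([quantize_flow(f).ravel() for f in flows])
    return np.bincount(words, minlength=vocab_size).astype(float)

def sparse_code(x, D, lam=0.1, n_iter=300):
    """Code document x over a pattern dictionary D (K patterns x V words)
    with an l1 penalty, via ISTA -- a simplified stand-in for STC inference."""
    L = np.linalg.norm(D @ D.T, 2)            # Lipschitz constant of the gradient
    w = np.zeros(D.shape[0])
    for _ in range(n_iter):
        w = w - (D @ (w @ D - x)) / L         # gradient step on 0.5*||x - wD||^2
        w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft threshold
    return w
```

The l1 soft-thresholding is what yields the "few non-zero coefficients" per clip: patterns that contribute less than the threshold to reconstructing the clip's word histogram are zeroed out, so each clip is explained by a handful of dominant motion patterns.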