Scene and Content Analysis from Multiple Video Streams

  • Authors: S. Guler
  • Affiliations: -
  • Venue: AIPR '01: Proceedings of the 30th Applied Imagery Pattern Recognition Workshop
  • Year: 2001

Abstract

In this paper, we describe a framework for video analysis and a method to detect and understand the class of events we refer to as "split and merge events" from single or multiple video streams. We start with automatic detection of scene changes, including camera operations such as zooms, pans, tilts, and scene cuts. For each new scene, camera calibration is performed and the scene geometry is estimated to determine the absolute position of each detected object. Objects in the video scenes are detected using an adaptive background subtraction method and tracked over consecutive frames. Detection and tracking are designed to identify the key split and merge behaviors, where one object splits into two or more objects, or two or more objects merge into one. We have identified split and merge behaviors as the key behavioral components of several higher-level activities such as package drop-offs, exchanges between people, people getting out of cars, or people forming crowds. We embed data about scenes, camera parameters, object features, and positions into the video stream as metadata to correlate, compare, and associate the results for several related scenes and achieve better video event understanding. Storing the detailed syntactic information in this location allows it to be physically associated with the video itself and guarantees that analysis results are preserved in archival storage or when sub-clips are created for distribution to other users. We present some preliminary results over single and multiple video streams.
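To make the pipeline concrete, the sketch below illustrates the two steps named in the abstract: adaptive background subtraction for object detection, and a frame-to-frame overlap test that flags split and merge events. This is a minimal illustration, not the authors' implementation; the exponential learning rate, foreground threshold, and grayscale NumPy frame format are assumptions introduced here for clarity.

```python
# Minimal sketch (assumed parameters, not the paper's code):
# adaptive background subtraction via an exponential running average,
# then a blob-overlap test between consecutive frames to flag
# split events (one blob -> many) and merge events (many -> one).
import numpy as np
from scipy import ndimage  # connected-component labelling

ALPHA = 0.05    # background learning rate (assumed)
THRESH = 25.0   # foreground intensity threshold (assumed)

def foreground_blobs(frame, background):
    """Update the background model and return per-object boolean masks."""
    diff = np.abs(frame.astype(np.float32) - background)
    mask = diff > THRESH
    # Adapt the background only where no foreground was detected.
    background[~mask] = (1 - ALPHA) * background[~mask] + ALPHA * frame[~mask]
    labels, n = ndimage.label(mask)
    return [(labels == i) for i in range(1, n + 1)], background

def split_merge_events(prev_blobs, curr_blobs):
    """Flag a split when one previous blob overlaps several current blobs,
    and a merge when several previous blobs overlap one current blob."""
    events = []
    for i, p in enumerate(prev_blobs):
        hits = [j for j, c in enumerate(curr_blobs) if np.any(p & c)]
        if len(hits) > 1:
            events.append(("split", i, hits))
    for j, c in enumerate(curr_blobs):
        hits = [i for i, p in enumerate(prev_blobs) if np.any(p & c)]
        if len(hits) > 1:
            events.append(("merge", hits, j))
    return events
```

In this reading, a package drop-off would appear as a "split" record (the person blob separating from the package blob), while people forming a crowd would appear as successive "merge" records; the paper's framework additionally ties these events to calibrated scene positions and embeds them as metadata in the stream.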