A multimodal approach for dynamic event capture of vehicles and pedestrians

  • Authors:
  • Jeffrey Ploetner; Mohan M. Trivedi

  • Affiliations:
  • University of California, San Diego, La Jolla, CA

  • Venue:
  • Proceedings of the 4th ACM international workshop on Video surveillance and sensor networks
  • Year:
  • 2006

Abstract

This paper presents an overview of a novel multimodal system being developed at UC San Diego for vehicle and pedestrian detection, event capture, and analysis. A Distributed Multimodal Array (DiMMA) framework is presented for the sensory data acquisition, processing, analysis, fusion, and active control mechanisms needed to recognize objects, events, and activities that have multimodal signatures. Sensing modalities currently under investigation include video, audio, seismic, laser ranging, magnetic, and passive infrared. Feature extraction and data fusion techniques are being explored to improve robustness and to characterize the strengths and weaknesses of each sensing modality. Preliminary results from this rapidly deployable system are discussed, along with possible future expansions, including geophones, pneumatic road tubes, and traditional inductive loops.
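
The abstract does not describe the fusion stage in detail. As a purely illustrative aid, the sketch below shows one simple way per-modality detection confidences could be combined with a weighted average; the `Detection` class, modality names, and weights are assumptions for illustration, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Hypothetical per-modality detection report (illustrative only)."""
    modality: str      # e.g. "video", "audio", "seismic", "pir"
    confidence: float  # detector confidence in [0, 1]

# Assumed reliability weights per modality; not values from the paper.
MODALITY_WEIGHTS = {"video": 0.5, "audio": 0.2, "seismic": 0.2, "pir": 0.1}

def fuse_confidences(detections: list[Detection]) -> float:
    """Combine per-modality confidences with a simple weighted average."""
    num = sum(MODALITY_WEIGHTS.get(d.modality, 0.0) * d.confidence for d in detections)
    den = sum(MODALITY_WEIGHTS.get(d.modality, 0.0) for d in detections)
    return num / den if den > 0 else 0.0

if __name__ == "__main__":
    reports = [Detection("video", 0.9), Detection("audio", 0.6), Detection("seismic", 0.7)]
    print(f"Fused event confidence: {fuse_confidences(reports):.2f}")
```

In practice a system like the one described would likely use more sophisticated fusion (e.g., probabilistic or feature-level methods); the weighted average here is only meant to make the idea of combining heterogeneous sensor evidence concrete.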