Distributed Interactive Video Arrays for Event Capture and Enhanced Situational Awareness
IEEE Intelligent Systems
This paper presents an overview of a novel multimodal system being developed at UC San Diego for vehicle and pedestrian detection, event capture, and analysis. A Distributed Multimodal Array (DiMMA) framework is presented, covering the sensory data acquisition, processing, analysis, fusion, and active control mechanisms needed to recognize objects, events, and activities that have multimodal signatures. Sensing modalities currently under investigation include video, audio, seismic, laser ranging, magnetic, and passive infrared. Feature extraction and data fusion techniques are being investigated to improve robustness and to assess the advantages and disadvantages of each sensing modality. Preliminary results from this rapidly deployable system are discussed, along with possible future expansions, including geophones, pneumatic road tubes, and traditional inductive loops.