We describe an architecture for implementing scene understanding algorithms in the visual surveillance domain. To achieve a high-level description of events observed by multiple cameras, many inter-related, event-driven processes must be executed. We use the agent paradigm to provide a framework in which these processes can be managed. Each camera has an associated agent, which detects and tracks moving regions of interest; these tracked regions are used to construct and update object agents. Each camera is calibrated so that image co-ordinates can be transformed into ground plane locations. By comparing properties, two object agents can infer that they have the same referent, i.e. that two cameras are observing the same entity, and consequently merge their identities. Each object's trajectory is classified with a type of activity, with reference to a ground plane agent, which stores a hidden Markov model of learned activity patterns. We demonstrate objects tracked simultaneously in two cameras, whose agents infer the shared observation. The combination of the agent framework and the visual surveillance application provides an excellent environment for the development and evaluation of scene understanding algorithms.
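The calibration step described above amounts to a planar projective mapping: a camera agent transforms an image point into a ground-plane location via a homography. A minimal sketch, in which the 3x3 matrix `H` is invented for illustration (in practice it would be estimated during camera calibration):

```python
import numpy as np

# Hypothetical image-to-ground-plane homography (units: metres);
# a real H would come from the camera calibration procedure.
H = np.array([[0.02, 0.0,   -3.0],
              [0.0,  0.03,  -2.0],
              [0.0,  0.001,  1.0]])

def to_ground_plane(H, u, v):
    """Project an image point (u, v) to ground-plane coordinates
    using homogeneous coordinates and a perspective divide."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

gx, gy = to_ground_plane(H, 320.0, 240.0)
```

Because all cameras report positions in the same ground-plane frame, object agents from different cameras become directly comparable.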
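The identity-merging inference can be sketched as a proximity test on the shared ground plane: two object agents from different cameras whose positions nearly coincide are assumed to have the same referent. The agent records and the distance threshold below are illustrative assumptions, not the paper's actual matching criterion (which compares agent properties more generally):

```python
import itertools
import math

# Hypothetical object-agent records: (agent_id, camera_id, ground x, ground y).
observations = [
    ("obj-1", "cam-A", 2.74, 4.19),
    ("obj-2", "cam-B", 2.80, 4.25),
    ("obj-3", "cam-B", 9.10, 1.00),
]

def merge_referents(observations, max_dist=0.5):
    """Pair object agents seen by different cameras whose ground-plane
    positions lie within max_dist metres: assumed to share a referent."""
    merged = []
    for (id1, c1, x1, y1), (id2, c2, x2, y2) in itertools.combinations(observations, 2):
        if c1 != c2 and math.hypot(x1 - x2, y1 - y2) <= max_dist:
            merged.append((id1, id2))
    return merged
```

Here `merge_referents(observations)` pairs the two agents observing the same entity while leaving the distant third agent unmatched.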
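Trajectory classification against the ground plane agent's learned models can be sketched with the standard forward algorithm: score a quantised trajectory under each hidden Markov model and pick the most likely one. The two activity models and the symbol alphabet below are invented for illustration; the paper's models are learned from observed activity patterns:

```python
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-symbol HMM
    with initial distribution pi, transitions A, and emissions B."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

# Hypothetical activity models over two trajectory symbols
# (0 = moving, 1 = stationary); real models would be trained from data.
models = {
    "walking":   (np.array([0.9, 0.1]),
                  np.array([[0.9, 0.1], [0.5, 0.5]]),
                  np.array([[0.9, 0.1], [0.4, 0.6]])),
    "loitering": (np.array([0.2, 0.8]),
                  np.array([[0.5, 0.5], [0.1, 0.9]]),
                  np.array([[0.3, 0.7], [0.1, 0.9]])),
}

def classify_trajectory(obs):
    """Label a quantised trajectory with its most likely activity model."""
    return max(models, key=lambda name: forward_log_likelihood(*models[name], obs))
```

A mostly-moving symbol sequence is thus labelled "walking", and a mostly-stationary one "loitering".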