Video-based surveillance systems have a wide range of applications in traffic monitoring, as they provide richer information than other sensors. In this paper, we present a rule-based framework for behavior and activity detection in traffic videos obtained from stationary cameras. Moving targets are segmented from the images and tracked in real time. The tracked targets are classified into different categories using a novel Bayesian network approach, which combines image features with image-sequence-based tracking results for robust classification. Tracking and classification results are then used in a programmed context to analyze behavior. For behavior recognition, two types of interaction are considered: interactions between two or more mobile targets in the camera's field of view (FoV), and interactions between targets and stationary objects in the environment. The framework relies on two types of a priori information: 1) contextual information about the camera's FoV, in terms of the stationary objects in the scene, and 2) sets of predefined behavior scenarios to be analyzed in different contexts. The system recognizes behavior from video and produces a lexical description of the detected behavior. It is also capable of handling the uncertainties that arise from errors in visual signal processing. We demonstrate successful behavior recognition results for pedestrian-vehicle and vehicle-checkpost interactions.
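To make the classification step concrete, the following is a minimal sketch of Bayesian target classification from image features, in the spirit of the approach described above. It is not the authors' implementation: the two classes, the two boolean features (bounding-box aspect ratio and speed), and all probability values are invented for illustration.

```python
# Illustrative sketch only: classify a tracked target as "pedestrian" or
# "vehicle" by combining per-feature likelihoods with class priors
# (a naive-Bayes-style simplification of a Bayesian network).
# All feature names and probabilities below are hypothetical.

PRIORS = {"pedestrian": 0.5, "vehicle": 0.5}

# P(feature is present | class); absence contributes (1 - value).
LIKELIHOODS = {
    "pedestrian": {"aspect_tall": 0.8, "speed_slow": 0.9},
    "vehicle":    {"aspect_tall": 0.1, "speed_slow": 0.2},
}

def posterior(features):
    """Return normalized class posteriors for observed boolean features."""
    scores = {}
    for cls, prior in PRIORS.items():
        p = prior
        for feat, present in features.items():
            like = LIKELIHOODS[cls][feat]
            p *= like if present else (1.0 - like)
        scores[cls] = p
    total = sum(scores.values())
    return {cls: p / total for cls, p in scores.items()}

# A tall, slow-moving blob is far more likely to be a pedestrian.
post = posterior({"aspect_tall": True, "speed_slow": True})
print(max(post, key=post.get))  # -> pedestrian
```

Combining several weak image cues this way is what makes the classification robust to individual feature errors: no single noisy measurement (e.g. a momentarily wrong aspect ratio) dominates the posterior.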