This paper focuses on the unexplored problem of inferring the motion of objects that are invisible to all cameras in a multiple-camera setup. Rather than merely learning relationships between disjoint cameras, we take the next step and infer the exact spatiotemporal behavior of objects while they are invisible. Given object trajectories within the fields of view (FOVs) of disjoint cameras, we introduce constraints on the behavior of objects as they travel through the unobservable areas that lie in between: vehicle following (the trajectories of vehicles adjacent to each other at entry and exit are time-shifted relative to one another), collision avoidance (no two trajectories pass through the same location at the same time), and temporal smoothness (the allowable movements of vehicles are restricted by physical limits). These constraints are embedded in a generalized, global cost function for the entire scene that incorporates the influence of every object; a bounded minimization using an interior point algorithm then yields trajectory representations that define each object's exact dynamics and behavior while invisible. Finally, a statistical representation of motion in the entire scene is estimated to obtain a probabilistic distribution over individual behaviors, such as turns, constant-velocity motion, deceleration to a stop, and acceleration from rest, for evaluation and visualization. Experiments are reported on real-world videos from multiple disjoint cameras in the NGSIM data set, and both qualitative and quantitative analysis confirm the validity of our approach.
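To make the optimization concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of the core idea: hidden trajectories through the unobserved region are parameterized by interior waypoints anchored at the observed entry and exit points, a global cost combines a temporal-smoothness term (penalizing acceleration) with a soft collision-avoidance penalty, and the cost is minimized with SciPy's `trust-constr` solver, an interior-point-style method standing in for the interior point algorithm described above. All function names, weights (`d_min`, `w_col`), and the waypoint count are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def hidden_trajectory_cost(x, entries, exits, n_pts, d_min=5.0, w_col=100.0):
    """Global scene cost over all vehicles' hidden waypoints (illustrative)."""
    n_veh = len(entries)
    pts = x.reshape(n_veh, n_pts, 2)
    trajs, cost = [], 0.0
    for v in range(n_veh):
        # Anchor interior waypoints at the observed entry/exit positions.
        traj = np.vstack([entries[v], pts[v], exits[v]])
        trajs.append(traj)
        # Temporal smoothness: penalize second differences (acceleration).
        acc = np.diff(traj, n=2, axis=0)
        cost += np.sum(acc ** 2)
    # Collision avoidance: soft penalty when two vehicles come within d_min
    # of each other at the same time step.
    for a in range(n_veh):
        for b in range(a + 1, n_veh):
            d = np.linalg.norm(trajs[a] - trajs[b], axis=1)
            cost += w_col * np.sum(np.maximum(0.0, d_min - d) ** 2)
    return cost

def infer_hidden(entries, exits, n_pts=8):
    """Recover hidden waypoints between observed entry/exit points."""
    entries = np.asarray(entries, dtype=float)
    exits = np.asarray(exits, dtype=float)
    n_veh = len(entries)
    # Initialize with straight-line interpolation through the blind region.
    init = np.stack([np.linspace(e, x, n_pts + 2)[1:-1]
                     for e, x in zip(entries, exits)])
    res = minimize(hidden_trajectory_cost, init.ravel(),
                   args=(entries, exits, n_pts), method="trust-constr")
    return res.x.reshape(n_veh, n_pts, 2)
```

For example, two vehicles crossing the blind region in parallel lanes 10 m apart already satisfy both constraints, so the minimizer keeps their trajectories essentially straight; vehicles on converging paths would instead be bent or time-shifted apart by the collision penalty.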