Monitoring Activities from Multiple Video Streams: Establishing a Common Coordinate Frame
IEEE Transactions on Pattern Analysis and Machine Intelligence
This paper describes a method for temporally calibrating video sequences from unsynchronized cameras using image processing operations, and presents two search algorithms for matching and aligning trajectories across different camera views. Existing multi-camera systems assume that input video sequences are synchronized, either by genlock or by time-stamp information and a centralized server. However, hardware-based synchronization increases installation cost, so image information must be used to align frames from cameras whose clocks are not synchronized. The system built for temporal calibration is composed of three modules: an object tracking module, a calibration data extraction module, and a search module. A robust and efficient search algorithm is introduced that recovers the frame offset by matching trajectories across views and selecting the most reliable match. Because it draws on information from multiple trajectories, the algorithm is robust to errors in background subtraction and location extraction, and it can handle very large frame offsets. A RANdom SAmple Consensus (RANSAC)-based version of this search algorithm is also introduced. Results obtained with different video sequences demonstrate the robustness of the algorithms in recovering a wide range of frame offsets for video sequences with varying levels of object activity.
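The core idea of the offset search can be illustrated with a minimal sketch: slide one camera's trajectory over the other's and pick the temporal offset that minimizes the average positional disagreement. This is an assumption-laden simplification of the paper's method; it presumes the trajectories have already been mapped into a common coordinate frame (e.g. via a ground-plane homography), and all names are illustrative.

```python
# Minimal sketch (not the paper's implementation): recover the frame
# offset between two unsynchronized cameras by exhaustively scoring
# candidate offsets. Trajectories are assumed to already lie in a
# common coordinate frame.

def recover_offset(traj_a, traj_b, max_offset):
    """traj_a, traj_b: per-frame (x, y) positions of one tracked object.
    Returns the offset `off` such that traj_b[i + off] best matches traj_a[i]."""
    best_score, best_off = float("inf"), 0
    for off in range(-max_offset, max_offset + 1):
        total, count = 0.0, 0
        for i, (xa, ya) in enumerate(traj_a):
            j = i + off
            if 0 <= j < len(traj_b):
                xb, yb = traj_b[j]
                total += ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
                count += 1
        if count:  # average distance over the overlapping portion
            score = total / count
            if score < best_score:
                best_score, best_off = score, off
    return best_off
```

A RANSAC-style variant, as the abstract suggests, would instead hypothesize offsets from small random subsets of trajectory correspondences and keep the hypothesis with the largest consensus set, making the estimate robust to gross tracking outliers.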