M2Tracker: A Multi-View Approach to Segmenting and Tracking People in a Cluttered Scene
International Journal of Computer Vision
ECCV '02 Proceedings of the 7th European Conference on Computer Vision-Part I
Abstract: We describe an algorithm for detecting and tracking multiple people in a cluttered scene using multiple synchronized cameras placed far apart, an arrangement that yields several wide-baseline camera pairs. Each image is first segmented; then, for each camera pair, the segmented regions are matched across views along epipolar lines. The centers of matching segments are back-projected to identify 3D points in the scene that potentially correspond to people, and these points are projected onto the ground plane. The ground-plane estimates from the individual wide-baseline pairs are then combined by a scheme that rejects outliers, yielding robust estimates of the 2D locations of the people; these estimates are used to track people over time. In practice the algorithm performs well in scenes containing multiple people, even when they occlude one another in every camera view.
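Two steps of the pipeline lend themselves to a short sketch: back-projecting a pair of matched segment centers to a 3D point (here via standard linear/DLT triangulation, a common choice, not necessarily the authors' exact method), and combining the resulting ground-plane points from several wide-baseline pairs while rejecting outliers (here a simple median-distance test standing in for the paper's combination scheme). The camera matrices and the threshold below are illustrative assumptions.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image points
    of the matched segment centers in each view."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # inhomogeneous 3D point

def combine_ground_estimates(points_3d, thresh=0.5):
    """Project candidate 3D points onto the ground plane (drop the height
    coordinate), discard points farther than `thresh` from the median
    location, and average the survivors into one 2D person estimate."""
    ground = np.asarray(points_3d, dtype=float)[:, :2]
    med = np.median(ground, axis=0)
    keep = np.linalg.norm(ground - med, axis=1) < thresh
    return ground[keep].mean(axis=0)
```

For example, with two toy cameras `P1 = [I | 0]` and `P2` translated one unit along x, a point at `(0.5, 0.2, 5)` projects to `(0.1, 0.04)` and `(-0.1, 0.04)`, and `triangulate` recovers it; a gross outlier among the pairwise ground-plane candidates is then rejected by `combine_ground_estimates` before averaging.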