Evaluation of manually created ground truth for multi-view people localization
Proceedings of the International Workshop on Video and Image Ground Truth in Computer Vision Applications
In this paper, we introduce a novel multi-view annotation tool for generating 3D ground truth data on the real-world locations of people in a scene. The tool lets the user accurately mark each person's ground occupancy by aligning an oriented rectangle on the ground plane; the person's height can also be adjusted. To achieve precise ground truth, the user is aided by the video frames of multiple synchronized and calibrated cameras. Finally, the 3D annotations can easily be converted to 2D image positions using the available calibration matrices. A key advantage of the proposed technique is that different methods can be compared against each other, whether they estimate the real-world ground positions of people or their 2D positions in the camera images. To this end, we define two error metrics that quantitatively evaluate the estimated positions. We used the proposed tool to annotate two publicly available datasets and evaluated the metrics on two state-of-the-art algorithms.
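The 3D-to-2D conversion mentioned above can be sketched with standard pinhole-camera projection. This is a minimal illustration, not the tool's actual implementation: it assumes each camera's calibration is available as a 3×4 projection matrix, and the function name is a hypothetical helper.

```python
import numpy as np

def project_to_image(P, world_point):
    """Project a 3D world point to 2D pixel coordinates using a
    3x4 camera projection matrix P (hypothetical helper; the
    annotation tool's actual API is not specified in the abstract)."""
    X = np.append(np.asarray(world_point, dtype=float), 1.0)  # homogeneous coordinates
    x = P @ X                                                 # project onto image plane
    return x[:2] / x[2]                                       # perspective division

# Toy example: canonical camera at the origin looking down +Z
P = np.hstack([np.eye(3), np.zeros((3, 1))])
print(project_to_image(P, [2.0, 4.0, 2.0]))  # -> [1. 2.]
```

With one such matrix per synchronized camera, an annotated 3D ground position (and adjusted person height) can be mapped into every view, which is what enables comparing both ground-plane and image-plane localization methods against the same annotation.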