A camera mounted on an aerial vehicle provides an excellent means of monitoring large areas of a scene. Deploying several such cameras on different aerial vehicles offers further flexibility, both in increased visual scope and in the pursuit of multiple targets. In this paper, we address the problem of tracking objects across multiple moving airborne cameras. Since the cameras are moving and often widely separated, direct appearance-based or proximity-based constraints cannot be used. Instead, we exploit geometric constraints on the relationship between the motion of each object across cameras to test multiple correspondence hypotheses, without assuming any prior calibration information. First, we propose a statistically and geometrically meaningful means of evaluating a hypothesized correspondence between two observations in different cameras. Second, since multiple cameras exist, coherency in correspondence must be ensured, i.e. transitive closure must be maintained across more than two cameras. To guarantee such coherency, we pose the problem of object tracking across cameras as a k-dimensional matching and use an approximation to find the maximum-likelihood assignment of correspondences. Third, we show that, as a result of tracking objects across the cameras, a concurrent visualization of multiple aerial video streams is possible. Results are shown on a number of real and controlled scenarios with multiple objects observed by multiple cameras, validating our qualitative models.
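The two-camera core of this idea can be illustrated with a minimal sketch. It assumes a Gaussian error model on the residuals between trajectories after the views have already been geometrically aligned, and finds the maximum-likelihood assignment by brute force over permutations; the function names (`correspondence_loglik`, `ml_assignment`) and the Gaussian model are hypothetical stand-ins, and the brute-force search is only illustrative of what the paper's k-dimensional matching (solved approximately) generalizes to k cameras:

```python
import itertools

def correspondence_loglik(traj_a, traj_b, sigma=1.0):
    """Hypothetical Gaussian log-likelihood that two trajectories (lists of
    (x, y) points, assumed already mapped into a common frame) belong to
    the same object; larger is more likely."""
    return sum(-((ax - bx) ** 2 + (ay - by) ** 2) / (2 * sigma ** 2)
               for (ax, ay), (bx, by) in zip(traj_a, traj_b))

def ml_assignment(trajs_a, trajs_b, sigma=1.0):
    """Brute-force maximum-likelihood one-to-one assignment between the
    objects seen by two cameras. Exponential in the number of objects, so
    only usable for tiny examples; a real system would use an approximate
    matching algorithm instead."""
    n = len(trajs_a)
    best_perm, best_ll = None, float("-inf")
    for perm in itertools.permutations(range(n)):
        ll = sum(correspondence_loglik(trajs_a[i], trajs_b[j], sigma)
                 for i, j in enumerate(perm))
        if ll > best_ll:
            best_perm, best_ll = perm, ll
    return best_perm, best_ll

# Two objects observed by both cameras, with small alignment noise;
# camera B lists them in the opposite order.
a = [[(0.0, 0.0), (1.0, 0.0)], [(5.0, 5.0), (5.0, 6.0)]]
b = [[(5.1, 5.0), (5.0, 6.1)], [(0.1, 0.0), (1.0, 0.1)]]
perm, ll = ml_assignment(a, b)
print(perm)  # best permutation: (1, 0), i.e. a[0]<->b[1], a[1]<->b[0]
```

With more than two cameras, independently solving each pairwise assignment can violate transitive closure (a↔b and b↔c implying a different a↔c), which is why the joint k-dimensional formulation matters.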