In this paper we address the problem of establishing a computational model for visual attention using the cooperation between two cameras. More specifically, we wish to maintain a visual event within the field of view of a rotating and zooming camera by understanding and modeling the geometric and kinematic coupling between a static camera and an active camera. The static camera has a wide field of view, allowing panoramic surveillance at low resolution; high-resolution details can be captured by the second camera, provided that it looks in the right direction. We derive an algebraic formulation for the coupling between the two cameras and specify the practical conditions that yield a unique solution. We also describe a method for separating a foreground event (such as a moving object) from its background while the active camera rotates. A set of outdoor experiments shows the two-camera system in operation.
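To make the idea of the static-to-active coupling concrete, the following is a minimal sketch (not the paper's actual formulation) of how a target pixel in the static camera can be converted into pan and tilt commands for the active camera. It assumes, for illustration only, that the two cameras are approximately co-located and that the static camera's intrinsic matrix `K` is known; the `pixel_to_pan_tilt` function and the numeric values of `K` are hypothetical.

```python
import numpy as np

def pixel_to_pan_tilt(u, v, K):
    """Back-project pixel (u, v) through intrinsics K to a viewing ray,
    then express that ray as pan/tilt angles (degrees) for an active
    camera assumed to share the static camera's optical center."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in camera frame
    x, y, z = ray
    pan = np.degrees(np.arctan2(x, z))               # rotation about the vertical axis
    tilt = np.degrees(np.arctan2(-y, np.hypot(x, z)))  # elevation above the optical axis
    return pan, tilt

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A target at the principal point requires no rotation: (pan, tilt) = (0, 0).
pan, tilt = pixel_to_pan_tilt(320.0, 240.0, K)
```

The co-location assumption removes the parallax between the two viewpoints; the paper's algebraic formulation is precisely what replaces this shortcut when the cameras are physically separated.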