Two key problems for camera networks that observe wide areas with many distributed cameras are self-localization and camera identification. Although many methods exist for localizing the cameras, one of the easiest and most desirable is to estimate camera positions by having the cameras observe each other; hence the term self-localization. If the cameras have a wide field of view, e.g., omnidirectional cameras, and can observe each other, the baseline distances between pairs of cameras and their relative locations can be determined. However, if a camera's projection onto the images of the other cameras is too small to be readily visible, the baselines cannot be detected. In this paper, a method is proposed to determine the baselines and relative locations of these “invisible” cameras. The method consists of two processes executed simultaneously: (a) statistically detecting the baselines among the cameras, and (b) localizing the cameras by using the information from (a) and propagating triangle constraints. Process (b) localizes cameras that can observe each other, and it does not require complete mutual observation among the cameras. However, it fails when many camera pairs cannot observe each other because of poor image resolution. The statistical baseline detection of process (a) solves this problem. This methodology is described in detail and results are provided for several scenarios.
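The triangle-constraint propagation of process (b) can be illustrated with a minimal sketch. The assumptions here are mine, not the paper's: baselines are given as pairwise distances in a plane, two seed cameras fix the coordinate frame, and each remaining camera is placed by intersecting the two distance circles of already-placed cameras (a plain trilateration step). The reflection ambiguity of each new triangle is resolved by always taking one branch; the actual method would use the observed directions instead.

```python
import math

def localize_by_triangles(baselines, seed):
    """Place cameras on a 2-D plane by propagating triangle constraints.

    `baselines` maps frozenset({i, j}) -> measured baseline length between
    cameras i and j (a hypothetical input format; the paper estimates these
    from mutual observations).  `seed` is a pair of cameras whose baseline
    fixes the coordinate frame.
    """
    a, b = seed
    positions = {a: (0.0, 0.0), b: (baselines[frozenset(seed)], 0.0)}
    cameras = {c for pair in baselines for c in pair}
    progress = True
    while progress:
        progress = False
        for c in cameras - positions.keys():
            # Anchors: already-placed cameras sharing a measured baseline with c.
            anchors = [p for p in positions if frozenset((p, c)) in baselines]
            if len(anchors) < 2:
                continue  # not enough constraints yet; retry on a later pass
            p1, p2 = anchors[:2]
            (x1, y1), (x2, y2) = positions[p1], positions[p2]
            r1 = baselines[frozenset((p1, c))]
            r2 = baselines[frozenset((p2, c))]
            d = math.hypot(x2 - x1, y2 - y1)
            # Intersect the circles of radii r1, r2 centered on the anchors:
            # t is the offset along the anchor baseline, h the perpendicular offset.
            t = (r1 * r1 - r2 * r2 + d * d) / (2.0 * d)
            h = math.sqrt(max(r1 * r1 - t * t, 0.0))
            ux, uy = (x2 - x1) / d, (y2 - y1) / d
            # Always take the +h branch (mirror ambiguity left unresolved here).
            positions[c] = (x1 + t * ux - h * uy, y1 + t * uy + h * ux)
            progress = True
    return positions
```

For three cameras with unit baselines (an equilateral triangle), seeding with cameras A and B places C at (0.5, √3/2); cameras that lack two measured baselines to placed cameras simply remain unlocalized, which mirrors why process (a)'s baseline detection is needed when mutual observations are sparse.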