This paper explores the possibilities of using monocular simultaneous localization and mapping (SLAM) algorithms in systems with more than one camera. The idea is to combine in a single system the advantages of both monocular vision (bearings-only observations with unlimited range but no instantaneous 3-D information) and stereovision (3-D information up to a limited range). Such a system should be able to instantaneously map nearby objects while still exploiting the bearing information provided by observations of remote ones. We do this by considering each camera as an independent sensor rather than treating the entire set as a monolithic supersensor. The visual data are processed by monocular methods and fused by the SLAM filter. Several advantages arise naturally, such as desynchronized firing of the sensors, the use of several unequal cameras, self-calibration, and cooperative SLAM with several independently moving cameras. We validate the approach with two different applications: a stereovision SLAM system with automatic self-calibration of the rig's main extrinsic parameters, and a cooperative SLAM system with two independent free-moving cameras in an outdoor setting.
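The core fusion idea — each camera contributes independent bearing-only observations that a single filter fuses sequentially — can be sketched as an EKF landmark update applied once per camera. This is a minimal 2-D illustration under assumed values (known camera positions, a single landmark state, hand-picked prior and noise), not the paper's actual implementation:

```python
import numpy as np

def ekf_bearing_update(x, P, cam_pos, z, R):
    """One EKF update from a single bearing-only camera observation.

    x: landmark position estimate (2,), P: covariance (2, 2),
    cam_pos: known camera position (2,), z: measured bearing in rad,
    R: (1, 1) measurement noise variance.
    """
    dx, dy = x[0] - cam_pos[0], x[1] - cam_pos[1]
    q = dx**2 + dy**2
    h = np.arctan2(dy, dx)                        # predicted bearing
    H = np.array([[-dy / q, dx / q]])             # Jacobian of arctan2 w.r.t. x
    y = np.arctan2(np.sin(z - h), np.cos(z - h))  # angle-wrapped innovation
    S = H @ P @ H.T + R                           # innovation covariance (1x1)
    K = P @ H.T / S                               # Kalman gain (2x1)
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Two cameras of the "rig", treated as independent bearing-only sensors
# (illustrative geometry: landmark and camera positions are assumptions).
true_lm = np.array([4.0, 3.0])
cams = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
x = np.array([3.0, 2.0])        # rough landmark prior
P = np.eye(2) * 4.0             # large prior uncertainty
R = np.array([[1e-4]])
for c in cams:
    z = np.arctan2(true_lm[1] - c[1], true_lm[0] - c[0])  # noiseless for demo
    x, P = ekf_bearing_update(x, P, c, z, R)
```

After both updates the estimate moves toward the intersection of the two bearing rays and the covariance shrinks; nearby landmarks are effectively triangulated across cameras, while a remote landmark would still be constrained in bearing by each update individually.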