Stereo vision based self-localization of autonomous mobile robots
RobVis'08 Proceedings of the 2nd international conference on Robot vision
We present a stereo-vision-based global self-localization strategy for tiny autonomous mobile robots in a known dynamic environment. Global localization is required at initial startup or when the robot loses track of its pose during navigation. Existing approaches rely on dense range scans, active beacon systems, artificial landmarks, bearing measurements from omni-directional cameras, or bearing/range calculation from single frontal cameras; we instead propose a feature-based stereo vision system for range calculation. The robot's location is estimated from range measurements to distinct landmarks such as color transitions, corners, junctions, and line intersections. Unlike methods based on angle measurement, this method requires only two distinct landmarks. Simulation results show that robots successfully localize themselves whenever two distinct landmarks are observed, a marked reduction in the number of landmarks needed for vision-based self-localization.
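Localization from ranges to two known landmarks reduces geometrically to intersecting two circles centered on the landmarks, which yields at most two candidate poses (mirror images across the landmark baseline). The paper does not publish its implementation; the following is a minimal illustrative sketch of that two-circle intersection, with the function name and interface being our own assumptions:

```python
import math

def localize_from_two_ranges(l1, l2, r1, r2):
    """Candidate robot positions (0, 1, or 2) given known landmark
    positions l1, l2 and stereo-measured ranges r1, r2 to them.
    Illustrative sketch only; not the paper's implementation."""
    (x1, y1), (x2, y2) = l1, l2
    d = math.hypot(x2 - x1, y2 - y1)           # baseline between landmarks
    # No solution if circles are disjoint, one inside the other, or coincident centers
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1**2 - r2**2 + d**2) / (2 * d)       # distance from l1 to chord midpoint
    h = math.sqrt(max(r1**2 - a**2, 0.0))      # half-length of the chord
    mx = x1 + a * (x2 - x1) / d                # chord midpoint
    my = y1 + a * (y2 - y1) / d
    # Two mirror-image solutions, one on each side of the baseline
    p1 = (mx + h * (y2 - y1) / d, my - h * (x2 - x1) / d)
    p2 = (mx - h * (y2 - y1) / d, my + h * (x2 - x1) / d)
    return [p1] if h == 0.0 else [p1, p2]
```

In practice the baseline ambiguity between the two mirror solutions is resolved by context, e.g. the known field boundary or the bearing at which each landmark was observed.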