Robot navigation with weak sensors
AAMAS'11 Proceedings of the 10th international conference on Advanced Agent Technology
Appearance-based localization compares the current image taken from a robot's camera to a set of pre-recorded images in order to estimate the robot's current location. Such techniques often maintain a graph of images that models the dynamics of the image sequence; this graph is then used to navigate in the space of images. In this paper we bring a set of techniques into the appearance-based approach, including Partially Observable Markov Decision Processes (POMDPs), hierarchical state representations, visual homing, and human-robot interaction. Our approach provides a complete solution to deploying a robot in a relatively small environment, such as a house or a workplace, allowing the robot to navigate robustly after minimal training. We demonstrate our approach on a real robot in two environments, showing that after a short training session the robot is able to navigate the environment well.
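The core idea of appearance-based localization described above can be sketched in a few lines: store descriptors of pre-recorded images in a graph whose edges connect consecutively observed locations, and localize a query image by nearest-neighbor matching. This is an illustrative sketch only, not the paper's implementation; the descriptors, location names, and plain Euclidean matching are simplifying assumptions (a real system would match SIFT-style features and maintain a belief over locations, e.g. with a POMDP).

```python
import math

def descriptor_distance(a, b):
    """Euclidean distance between two image descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class AppearanceMap:
    """Hypothetical graph of pre-recorded images: nodes are locations,
    edges link locations observed consecutively during training."""

    def __init__(self):
        self.descriptors = {}  # location name -> descriptor vector
        self.edges = {}        # location name -> set of neighbor names

    def add_image(self, name, descriptor, prev=None):
        # Record the training image and, if given, link it to the
        # previously visited location to capture sequence dynamics.
        self.descriptors[name] = descriptor
        self.edges.setdefault(name, set())
        if prev is not None:
            self.edges[prev].add(name)
            self.edges[name].add(prev)

    def localize(self, query):
        """Return the location whose stored image best matches the query."""
        return min(self.descriptors,
                   key=lambda n: descriptor_distance(self.descriptors[n], query))

# Usage: train on three toy "images", then localize a noisy query view.
m = AppearanceMap()
m.add_image("hall", [0.9, 0.1, 0.0])
m.add_image("kitchen", [0.1, 0.8, 0.1], prev="hall")
m.add_image("office", [0.0, 0.2, 0.9], prev="kitchen")
print(m.localize([0.15, 0.75, 0.1]))  # prints "kitchen"
```

The graph's edges are what make navigation in image space possible: from the matched node, the robot only needs to consider moves toward adjacent training images rather than searching the whole map.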