Navigation and mapping in large-scale space. AI Magazine.
Qualitative navigation for mobile robots. Artificial Intelligence.
Map learning with uninterpreted sensors and effectors. Artificial Intelligence.
The spatial semantic hierarchy. Artificial Intelligence.
Learning View Graphs for Robot Navigation. Autonomous Robots - Special issue on autonomous agents.
Handbook of Computer Vision Algorithms in Image Algebra.
Real-Time Simultaneous Localisation and Mapping with a Single Camera. ICCV '03 Proceedings of the Ninth IEEE International Conference on Computer Vision - Volume 2.
Robot Homing by Exploiting Panoramic Vision. Autonomous Robots.
Vision-based global localization and mapping for mobile robots. IEEE Transactions on Robotics.
In this article, we propose a new approach to the map-building task: the implementation of the Spatial Semantic Hierarchy (SSH), proposed by B. Kuipers, on a real robot fitted with an omnidirectional camera. Kuipers' original formulation of the SSH was slightly modified in order to manage more efficiently the knowledge the real robot collects while moving in the environment. The sensory data experienced by the robot are transformed by the different levels of the SSH to obtain a compact representation of the environment. This knowledge is stored in the form of a topological map and, eventually, of a metrical map. The aim of this article is to show that a catadioptric omnidirectional camera is a good sensor for the SSH and couples nicely with several of its elements. The panoramic view and rotational invariance of our omnidirectional camera make the identification and labelling of places a simple matter. A deeper insight is that the tracking and identification of events in an omnidirectional image, such as occlusions and alignments, can be used to segment continuous sensory image data into the discrete topological and metric elements of a map. The proposed combination of the SSH and omnidirectional vision provides a powerful general framework for robot mapping and offers new insights into the concept of “place.” Some preliminary experiments performed with a real robot in an unmodified office environment are presented.
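The rotational invariance that makes place labelling "a simple matter" can be illustrated with a minimal sketch, not taken from the paper: a rotation of the robot circularly shifts the columns of a panoramic image, and the magnitudes of a 1D Fourier transform along the angular axis are invariant to circular shifts, so they can serve as a heading-independent place signature. The function name `place_signature` and the choice of Fourier magnitudes are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def place_signature(panorama_row):
    # A robot rotation circularly shifts a panoramic scanline.
    # DFT magnitudes are invariant to circular shifts, so they
    # yield a signature of the place that ignores robot heading.
    spectrum = np.fft.rfft(np.asarray(panorama_row, dtype=float))
    return np.abs(spectrum)

# Two scanlines of the same place, differing only by robot heading:
row = np.random.default_rng(0).random(360)       # one angular scanline
rotated = np.roll(row, 90)                       # robot rotated by 90 degrees

sig_a = place_signature(row)
sig_b = place_signature(rotated)
assert np.allclose(sig_a, sig_b)  # same place, same signature
```

A real system would of course match signatures of full images under noise; this only shows why the omnidirectional geometry removes the heading ambiguity that a conventional camera would introduce.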