Up to now, the Simultaneous Localization and Map Building problem for autonomous navigation has been solved using quantitative (probabilistic) approaches, at high computational cost and at a low level of abstraction. Our interest is in using hybrid (qualitative + quantitative) representation and reasoning models to overcome these drawbacks. In this paper we present a novel cognitive vision system that captures information from the environment for map building. The cognitive vision module is based on a qualitative 3D model which generates, through a temporal Gabor transform, the qualitative textures that the robot should be seeing given the qualitative-quantitative map of the environment, and compares them with the real textures seen by the robot's camera. Different hypotheses are generated to explain the differences; these can be classified as errors in the textures obtained from the camera, which are ignored, or errors in the hybrid map of the environment, which are used to propose modifications of the map. The advantages of our model are: (1) we achieve a high degree of tolerance to visual recognition errors, because the input from the camera is filtered by the qualitative image generated by our model; (2) the continuous matching of the qualitative images generated by the model against the images obtained from the camera allows us to understand the video sequence seen by the robot, with the qualitative model serving as the cognitive interpretation of the scene; (3) the information from the cameras is never used directly to control the robot, but only once it has been interpreted as a meaningful modification of the current hybrid map, which provides sensor independence. We have implemented the cognitive vision module to solve the Simultaneous Localization and Map Building problem for the autonomous navigation of a Sony AIBO four-legged robot in an unknown labyrinth made of rectangular walls with a homogeneous but unknown texture, on a floor with a different texture.
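The texture-matching step described above (Gabor responses of an expected texture compared against those of the observed texture) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the paper uses a temporal Gabor transform, while the sketch below uses a plain spatial bank of four oriented Gabor filters and an energy signature per orientation; all function names and parameters are hypothetical.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0):
    """Real-valued Gabor kernel: Gaussian envelope times a cosine carrier
    oriented at angle `theta` with wavelength `lam` (pixels)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def texture_signature(img, thetas=(0.0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Mean squared Gabor response per orientation: a coarse texture
    signature that can be compared between expected and observed patches."""
    feats = []
    for th in thetas:
        k = gabor_kernel(theta=th)
        # All valid kernel-sized windows of the image, correlated with k.
        windows = sliding_window_view(img, k.shape)
        resp = np.einsum('ijkl,kl->ij', windows, k)
        feats.append(np.mean(resp**2))
    return np.array(feats)

def texture_mismatch(expected, observed):
    """Euclidean distance between signatures; a threshold on this value
    would trigger the hypothesis-generation step."""
    return float(np.linalg.norm(texture_signature(expected)
                                - texture_signature(observed)))

# Two synthetic textures: vertical vs. horizontal stripes of period 6 px.
vertical = np.tile(np.cos(2 * np.pi * np.arange(32) / 6.0), (32, 1))
horizontal = vertical.T
```

With this signature, a patch of vertical stripes yields its strongest energy at orientation 0 and a horizontal patch at pi/2, so identical wall textures produce a near-zero mismatch while a wall/floor boundary produces a large one.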