Cognitive Vision based on Qualitative Matching of Visual Textures and Envision Predictions for Aibo Robots

  • Authors:
  • David A. Graullera; Salvador Moreno; M. Teresa Escrig

  • Affiliations:
  • Dpto. de Informática, Universitat de València, Paterna, Valencia (Spain); Dpto. de Informática, Universitat de València, Paterna, Valencia (Spain); Dpto. Ingeniería y Ciencia de los Computadores, Universitat Jaume I, Castellón (Spain)

  • Venue:
  • Proceedings of the 2006 conference on Artificial Intelligence Research and Development
  • Year:
  • 2006

Abstract

Up to now, the Simultaneous Localization and Map Building problem for autonomous navigation has been solved using quantitative (probabilistic) approaches, at a high computational cost and at a low level of abstraction. Our interest is to use hybrid (qualitative + quantitative) representation and reasoning models to overcome these drawbacks. In this paper we present a novel cognitive vision system to capture information from the environment for map building. The cognitive vision module is based on a qualitative 3D model which generates, through a temporal Gabor transform, the qualitative textures which the robot should be seeing given the qualitative-quantitative map of the environment, and compares them with the real textures seen from the robot camera. Different hypotheses are generated to explain the differences between them; these can be classified into errors in the textures obtained from the camera, which are ignored, and errors in the hybrid map of the environment, which are used to propose modifications of this map. The advantages of our model are: (1) we achieve a high degree of tolerance against visual recognition errors, because the input from the camera is filtered by the qualitative image generated by our model; (2) the continuous matching of the qualitative images generated by the model against the images obtained from the camera allows us to understand the video sequence seen by the robot, offering the qualitative model as the cognitive interpretation of the scene; (3) the information from the cameras is never used directly to control the robot, but only once it has been interpreted as a meaningful modification of the current hybrid map, which allows sensor independence. We have implemented the cognitive vision module to solve the Simultaneous Localization and Map Building problem for the autonomous navigation of a Sony AIBO four-legged robot in an unknown labyrinth made of rectangular walls with a homogeneous but unknown texture, on a floor with a different texture.
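
To illustrate the kind of matching the abstract describes, the sketch below discretises Gabor filter energies into qualitative texture labels and compares two image patches by label rather than by raw pixel values. It is a minimal sketch, not the authors' implementation: the filter-bank parameters, the helper names (`texture_signature`, `qualitative_label`), and the energy thresholds are all hypothetical, and the paper's temporal Gabor transform over a video sequence is reduced here to a single-frame spatial transform.

```python
# Hypothetical sketch of qualitative texture matching with a Gabor filter
# bank (not the authors' code). Thresholds and parameters are illustrative.
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(frequency, theta, sigma=3.0, size=15):
    """Real part of a 2D Gabor filter at a given frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate image coordinates into the filter's orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * frequency * xr)
    return envelope * carrier

def texture_signature(patch, frequencies=(0.1, 0.2), n_orient=4):
    """Mean Gabor energy per (frequency, orientation) channel."""
    signature = []
    for f in frequencies:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            response = convolve(patch.astype(float), gabor_kernel(f, theta))
            signature.append(np.mean(np.abs(response)))
    return np.array(signature)

def qualitative_label(signature, low=1.0, high=5.0):
    """Discretise each channel's energy into a qualitative class."""
    return tuple('low' if e < low else 'high' if e > high else 'mid'
                 for e in signature)

def textures_match(patch_a, patch_b):
    """Two patches match qualitatively if their discrete labels agree."""
    return (qualitative_label(texture_signature(patch_a)) ==
            qualitative_label(texture_signature(patch_b)))
```

Comparing discrete labels rather than continuous filter responses is what would give such a scheme its tolerance to recognition noise: small fluctuations in Gabor energy rarely cross a class boundary, so only a texture difference large enough to change the qualitative label (e.g. wall texture where floor texture was predicted) would trigger a hypothesis about the map.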