Abstracting Visual Percepts to Learn Concepts

  • Authors:
  • Jean-Daniel Zucker; Nicolas Bredeche; Lorenza Saitta


  • Venue:
  • Proceedings of the 5th International Symposium on Abstraction, Reformulation and Approximation
  • Year:
  • 2002


Abstract

Efficiently identifying properties of its environment is an essential ability for a mobile robot that needs to interact with humans. Successful approaches to providing robots with this ability rely on ad hoc perceptual representations supplied by AI designers. Instead, our goal is to endow autonomous mobile robots (in our experiments, a Pioneer 2DX) with a perceptual system that can efficiently adapt itself to ease the learning task required to anchor symbols. Our approach is in line with meta-learning algorithms that iteratively change representations so as to discover one that is well fitted to the task. The architecture we propose may be seen as a combination of the two widely used approaches to feature selection: the Wrapper model and the Filter model. Experiments using the PLIC system to identify the presence of Humans and Fire Extinguishers demonstrate the interest of such an approach, which dynamically abstracts a well-fitted image description depending on the concept to be learned.
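The combination of the two feature-selection models mentioned in the abstract can be sketched as follows. This is not the PLIC system itself, only a minimal illustration of the general hybrid scheme: a cheap Filter step first ranks features by a per-feature score (here, absolute correlation with the label), and a Wrapper step then searches among the surviving candidates using the accuracy of an actual learner (here, leave-one-out 1-nearest-neighbour) as its score. All function names and the choice of scorer and learner are assumptions for illustration.

```python
# Hypothetical sketch of a hybrid Filter + Wrapper feature selector.
# Filter step: cheap per-feature scoring; Wrapper step: greedy forward
# search scored by an actual learner. Pure stdlib, for illustration only.
import random


def filter_step(X, y, keep):
    """Rank features by absolute correlation with the label; keep the top `keep`."""
    n, d = len(X), len(X[0])
    scores = []
    for j in range(d):
        col = [row[j] for row in X]
        mx, my = sum(col) / n, sum(y) / n
        cov = sum((c - mx) * (t - my) for c, t in zip(col, y))
        sx = sum((c - mx) ** 2 for c in col) ** 0.5
        sy = sum((t - my) ** 2 for t in y) ** 0.5
        scores.append(abs(cov / (sx * sy)) if sx and sy else 0.0)
    return sorted(range(d), key=lambda j: -scores[j])[:keep]


def loo_accuracy(X, y, feats):
    """Wrapper score: leave-one-out accuracy of a 1-nearest-neighbour learner."""
    def dist(a, b):
        return sum((a[j] - b[j]) ** 2 for j in feats)
    correct = 0
    for i in range(len(X)):
        nearest = min((k for k in range(len(X)) if k != i),
                      key=lambda k: dist(X[i], X[k]))
        correct += y[nearest] == y[i]
    return correct / len(X)


def wrapper_step(X, y, candidates):
    """Greedy forward selection over the filtered candidate features."""
    selected, best = [], 0.0
    improved = True
    while improved:
        improved = False
        for j in candidates:
            if j in selected:
                continue
            score = loo_accuracy(X, y, selected + [j])
            if score > best:
                best, selected = score, selected + [j]
                improved = True
                break
    return selected, best


# Usage on synthetic data: feature 0 tracks the label, the rest are noise.
random.seed(0)
y = [i % 2 for i in range(40)]
X = [[yi + random.gauss(0, 0.1)] + [random.gauss(0, 1) for _ in range(4)]
     for yi in y]
candidates = filter_step(X, y, keep=3)          # Filter prunes the space
selected, score = wrapper_step(X, y, candidates)  # Wrapper refines it
```

The design point the abstract alludes to is visible here: the Filter step keeps the search cheap by discarding most features before any learner is run, while the Wrapper step keeps the final choice faithful to the actual learning task.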