Designing neural control architectures for an autonomous robot using vision to solve complex learning tasks

  • Authors:
  • A. Revel; P. Gaussier

  • Affiliations:
  • Neurocybernetic Team - ETIS ENSEA/UCP, 6 Avenue du Ponceau, 95000 Cergy-Pontoise Cedex, France

  • Venue:
  • Biologically inspired robot behavior engineering
  • Year:
  • 2003

Abstract

In this chapter, we present a way to design neural network architectures to control an autonomous mobile robot that uses vision as its main sensor. We start by discussing the notion of autonomy; in particular, we show how it constrains both the learning and the architecture of the control system. We then propose a set of neural tools developed to solve problems linked with autonomous learning: the Perception-Action (PerAc) architecture, the Probabilistic Conditioning Rule (PCR), and a system for planning actions (integrating transition learning and prediction). Alongside the description of these tools, we illustrate on different examples (visual homing, a maze problem and planning) how they can be assembled into a generic control system reusable for several tasks. Finally, we discuss future developments and how this work can be integrated into a general cognitive science framework.
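
To make the perception-action coupling referred to above more concrete, the following is a minimal sketch, assuming a simplified two-pathway PerAc-style controller: a hard-wired reflex pathway plus a learned pathway that conditions associations between recognized perceptual categories and actions. The class name PerAcController, the learning rate, the takeover threshold, and the feature encoding are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


class PerAcController:
    """Two-pathway controller sketch: a reflex pathway plus a learned
    (conditioned) pathway that can take over once associations between
    perceptual categories and actions become strong enough."""

    def __init__(self, n_features, n_actions, lr=0.1, threshold=0.5):
        self.W = np.zeros((n_actions, n_features))  # learned associations
        self.lr = lr                                # conditioning rate (assumed)
        self.threshold = threshold                  # takeover confidence (assumed)

    def reflex(self, sensation):
        # Hard-wired low-level pathway: placeholder heuristic mapping
        # the strongest raw sensory channel to an action index.
        return int(np.argmax(sensation))

    def act(self, features, sensation):
        # Learned pathway: the recognized perceptual category proposes
        # an action; the reflex is used as the default behaviour.
        scores = self.W @ features
        if scores.max() > self.threshold:
            return int(np.argmax(scores))
        return self.reflex(sensation)

    def condition(self, features, action_taken):
        # Hebbian-like conditioning: strengthen the link between the
        # current perceptual category and the action that was performed.
        self.W[action_taken] += self.lr * features
        np.clip(self.W[action_taken], 0.0, 1.0, out=self.W[action_taken])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ctrl = PerAcController(n_features=8, n_actions=4)
    for _ in range(20):
        sensation = rng.random(4)   # raw input driving the reflex
        features = rng.random(8)    # visual category / place code
        a = ctrl.act(features, sensation)
        ctrl.condition(features, a)  # the reflex acts as a teaching signal
    print("learned weights:\n", ctrl.W)
```

The point of the sketch is the coupling itself: the reflex pathway drives behaviour at the start and provides the teaching signal that conditions the learned pathway. The chapter's actual tools (visual place recognition, the PCR, transition learning for planning) are of course richer than this toy thresholded associator.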