Image invariant robot navigation based on self organising neural place codes

  • Authors:
  • Kaustubh Chokshi, Stefan Wermter, Christo Panchev, Kevin Burn

  • Affiliation (all authors):
  • Centre for Hybrid Intelligent Systems, School of Computing and Technology, University of Sunderland, Sunderland, United Kingdom

  • Venue:
  • Biomimetic Neural Learning for Intelligent Robots
  • Year:
  • 2005

Abstract

For a robot to be autonomous, it must be able to navigate independently within an environment. The overall aim of this paper is to show that localisation can be performed without a pre-defined map being supplied to the robot by humans. In nature, place cells are neurons that respond to the location an animal occupies within its environment. In this paper we present a model of place cells based on Self Organising Maps. We also show how image invariance can improve the performance of the place cells and make the model more robust to noise. The incoming visual stimuli are interpreted by neural networks that respond only to specific combinations of visual landmarks. The activities of these networks implicitly represent environmental properties such as distance and orientation to the visual cues. Unsupervised learning is used to build the computational model of hippocampal place cells. After training, the robot can localise itself within the learned environment.
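The abstract's core idea — unsupervised learning of place cells with a Self Organising Map, where each trained map unit fires for one learned location — can be sketched as follows. This is a minimal illustrative SOM, not the authors' implementation: the synthetic landmark vectors, grid size, and decay schedules are all assumptions standing in for the paper's visual features.

```python
# Minimal SOM "place cell" sketch: each map unit learns to respond to one
# region of feature space; the best-matching unit (BMU) acts as the active
# place cell during localisation. All parameters here are illustrative.
import numpy as np

def train_som(data, grid=(5, 5), epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Unsupervised SOM training over landmark feature vectors."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))
    # Grid coordinates of each unit, used for neighbourhood distances.
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighbourhood
        for x in data:
            bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            neigh = np.exp(-d2 / (2 * sigma ** 2))
            # Pull the BMU and its grid neighbours toward the input.
            weights += lr * neigh[:, None] * (x - weights)
    return weights

def localise(weights, x):
    """Index of the winning 'place cell' for an observed feature vector."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

# Two synthetic 'places', each seen through noisy landmark views.
rng = np.random.default_rng(1)
place_a = rng.normal(0.2, 0.02, (20, 4))
place_b = rng.normal(0.8, 0.02, (20, 4))
som = train_som(np.vstack([place_a, place_b]))
```

After training, distinct locations activate distinct map units, which is the sense in which the robot "localises itself within the learned environment"; the paper's image-invariance step would additionally make the feature vectors stable under viewpoint and noise changes before they reach the map.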