Adaptive homing: robotic exploration tours

  • Authors:
  • Verena Vanessa Hafner

  • Affiliations:
  • Artificial Intelligence Lab, Department of Information Technology, University of Zurich

  • Venue:
  • Adaptive Behavior
  • Year:
  • 2001

Abstract

In this article, a minimalistic model for the learning and adaptation of visual homing is presented. Normalized Hebbian learning is used during exploration tours of a mobile robot to learn visual homing and to adapt to the sensory modalities. The sensors of the mobile robot (omnidirectional camera, magnetic compass) were chosen so that their data closely resemble the sensory data available to insects such as the desert ant Cataglyphis (almost omnidirectional vision, polarized-light compass), which is a remarkable navigator despite its tiny brain. The learned homing mechanism turned out to be closely related to Lambrinos and colleagues' average landmark vector (ALV) model and is largely independent of any special features of the environment. In contrast to the ALV model and other models of visual homing, no feature extraction or landmark segmentation is necessary. Mobile robot experiments were performed in an unmodified office environment to test the feasibility of learning visual homing.
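To make the learning mechanism concrete, below is a minimal sketch of a normalized Hebbian (Oja-style) weight update driven by a simulated omnidirectional view. All names, dimensions, and parameters here (N_PIXELS, N_OUT, ETA, the random input) are illustrative assumptions for the sketch, not the paper's actual robot setup or code.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PIXELS = 72   # assumed resolution of a 1-D omnidirectional intensity scan
N_OUT = 2       # assumed 2-D output, e.g. components of a home vector
ETA = 0.01      # assumed learning rate

W = rng.normal(scale=0.1, size=(N_OUT, N_PIXELS))

def oja_step(W, x, eta=ETA):
    """One normalized Hebbian (Oja) update per output unit:
    dW_i = eta * y_i * (x - y_i * W_i), which keeps ||W_i|| bounded."""
    y = W @ x                                    # post-synaptic activities
    W += eta * (np.outer(y, x) - (y[:, None] ** 2) * W)
    return W

# Simulated exploration tour: each step presents one panoramic view.
for _ in range(1000):
    view = rng.random(N_PIXELS)                  # stand-in for camera pixels
    W = oja_step(W, view)

print(np.linalg.norm(W, axis=1))                 # weight norms stay bounded
```

In Oja's formulation the subtractive term implicitly normalizes the weights, so no separate renormalization pass is needed after each update; this self-limiting property is what distinguishes normalized Hebbian learning from the plain Hebbian rule, which would let the weights grow without bound.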