1994 Special Issue: Mobile robot visual mapping and localization: A view-based neurocomputational architecture that emulates hippocampal place learning

  • Authors:
  • Ivan A. Bachelder; Allen M. Waxman

  • Venue:
  • Neural Networks - Special issue: models of neurodynamics and behavior
  • Year:
  • 1994

Abstract

We propose a real-time, view-based neurocomputational architecture for unsupervised 2-D mapping and localization within a 3-D environment defined by a spatially distributed set of visual landmarks. This architecture emulates place learning by hippocampal place cells in rats, and draws from the anatomy of the primate object ("What") and spatial ("Where") processing streams. It extends, by analogy, principles for learning characteristic views of 3-D objects (i.e., "aspects") to learning characteristic views of environments (i.e., "places"). Places are defined by the identities and approximate poses (the What) of landmarks, as provided by visible landmark aspects. They are also defined by prototypical locations (the Where) within the landmark constellation, as indicated by the panoramic spatial distribution of landmark gaze directions. Combining these object and spatial definitions results in place nodes whose activity profiles define decision boundaries that parcel a 2-D area of the environment into place regions. These profiles resemble the spatial firing patterns over hippocampal place fields observed in rat experiments. A real-time demonstration of these capabilities on the binocular mobile robot MAVIN (the Mobile Adaptive Visual Navigator) illustrates the potential of this approach for qualitative mapping and fine localization.
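
The abstract describes place nodes whose activity conjoins a "What" signal (which landmark aspects are visible) with a "Where" signal (the panoramic bearings to those landmarks). The sketch below is a minimal illustration of that conjunctive idea, not the paper's actual network: the function names, the multiplicative combination, the von Mises-style bearing similarity, and the example landmark aspects are all assumptions introduced here for clarity.

```python
import numpy as np

def what_match(observed_aspects, stored_aspects):
    """'What' stream: fraction of this place's stored landmark aspects
    that are currently visible (illustrative matching rule)."""
    return len(observed_aspects & stored_aspects) / max(len(stored_aspects), 1)

def where_match(robot_xy, landmarks_xy, stored_bearings, kappa=4.0):
    """'Where' stream: compare panoramic bearings to landmarks seen from
    robot_xy against the bearings stored for this place, using a
    von Mises-like similarity (an assumed, not published, form)."""
    bearings = np.arctan2(landmarks_xy[:, 1] - robot_xy[1],
                          landmarks_xy[:, 0] - robot_xy[0])
    return float(np.mean(np.exp(kappa * (np.cos(bearings - stored_bearings) - 1.0))))

def place_activity(robot_xy, place):
    """Conjunctive place-node response: high only where both streams agree,
    yielding a firing profile that falls off away from the learned location."""
    w = what_match(place["visible_aspects"], place["stored_aspects"])
    s = where_match(robot_xy, place["landmarks_xy"], place["stored_bearings"])
    return w * s

# Example: two hypothetical landmarks; the place was learned at the origin,
# so stored bearings are the bearings to each landmark from (0, 0).
landmarks = np.array([[2.0, 0.0], [0.0, 3.0]])
place = {
    "stored_aspects": {"door-front", "plant-left"},
    "visible_aspects": {"door-front", "plant-left"},
    "landmarks_xy": landmarks,
    "stored_bearings": np.arctan2(landmarks[:, 1], landmarks[:, 0]),
}
for xy in [np.zeros(2), np.array([1.0, 1.0])]:
    print(xy, round(place_activity(xy, place), 3))  # activity peaks at the learned spot
```

Evaluating `place_activity` over a grid of positions, one node per learned place, and taking the most active node at each point would parcel the plane into place regions, in the spirit of the decision boundaries described in the abstract.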