How active vision facilitates familiarity-based homing

  • Authors:
  • Andrew Philippides, Alex Dewar, Antoine Wystrach, Michael Mangan, Paul Graham

  • Affiliations:
  • Centre for Computational Neuroscience and Robotics, University of Sussex, UK; School of Informatics, University of Edinburgh, UK

  • Venue:
  • Living Machines '13: Proceedings of the Second International Conference on Biomimetic and Biohybrid Systems
  • Year:
  • 2013

Abstract

The ability of insects to visually navigate long routes to their nest has provided inspiration to engineers seeking to emulate their robust performance with limited resources [1-2]. Many models have been developed based on the elegant snapshot idea: remember what the world looks like from your goal and subsequently move to make your current view more like your memory [3]. In the majority of these models, a single view is stored at a goal location and acts as a form of visual attractor to that position (for a review see [4]). Recently, however, inspired by the behaviour of ants and the difficulties in extending traditional snapshot models to routes [5], we have proposed a new navigation model [6-7]. In this model, rather than using views to recall directions to the place where they were stored, views are used to recall the direction of facing or movement (identical for a forward-facing ant) at the place where the view was stored. To navigate, the agent scans the world by rotating and thus actively finds the most familiar view, a behaviour observed in Australian desert ants. Rather than recognising a place, the agent uses a familiar view to specify the action to take at that place.
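
A minimal sketch of this scanning strategy is given below, with its assumptions made explicit: render_view(position, heading) is a hypothetical function standing in for whatever supplies the agent's view at a given pose (e.g. a simulated world); familiarity is scored as a raw sum-of-squared-differences against the views stored while traversing the route, a simple stand-in rather than the familiarity measure used in the model of [6-7]; and the scan samples 36 evenly spaced headings.

    import numpy as np

    def familiarity(view, stored_views):
        # Familiarity = negative of the smallest sum-of-squared-differences
        # between the current view and any view stored along the route.
        # view: (H, W) array; stored_views: (N, H, W) array.
        diffs = ((stored_views - view) ** 2).sum(axis=(1, 2))
        return -diffs.min()

    def scan_for_heading(render_view, position, stored_views, n_headings=36):
        # Rotate on the spot: sample the view at each candidate heading
        # and return the heading whose view is most familiar.
        headings = np.linspace(0.0, 2.0 * np.pi, n_headings, endpoint=False)
        scores = [familiarity(render_view(position, h), stored_views)
                  for h in headings]
        return headings[int(np.argmax(scores))]

    def homing_step(position, render_view, stored_views, step=0.05):
        # One step of familiarity-based route following: scan, then move
        # a fixed distance along the most familiar heading.
        heading = scan_for_heading(render_view, position, stored_views)
        return position + step * np.array([np.cos(heading), np.sin(heading)])

Calling homing_step repeatedly from the release point retraces the route: because each stored view recalls a heading rather than a location, the scan itself supplies the direction of travel and no view ever needs to be matched to a place.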