Augmenting appearance-based localization and navigation using belief update

  • Authors:
  • George Chrysanthakopoulos; Guy Shani

  • Affiliations:
  • Microsoft Research, Redmond, WA; Ben Gurion University, Beer Sheva, Israel

  • Venue:
  • Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS) - Volume 2
  • Year:
  • 2010

Abstract

Appearance-based localization compares the current image taken from a robot's camera to a set of pre-recorded images in order to estimate the robot's current location. Such techniques often maintain a graph of images, modeling the dynamics of the image sequence, and this graph is used to navigate in the space of images. In this paper we bring a set of techniques together, including Partially Observable Markov Decision Processes (POMDPs), hierarchical state representations, visual homing, and human-robot interaction, into the appearance-based approach. Our approach provides a complete solution to the deployment of a robot in a relatively small environment, such as a house or a workplace, allowing the robot to robustly navigate the environment after minimal training. We demonstrate our approach in two environments using a real robot, showing that after a short training session the robot is able to navigate the environment well.
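
The abstract does not give implementation details, but the core idea it describes, a belief update over a graph of pre-recorded images, can be illustrated with a minimal sketch. The following Python example assumes a discrete belief over image nodes, a transition matrix derived from the image graph, and an observation likelihood based on image similarity; all function and variable names are illustrative and not taken from the paper.

```python
# Minimal sketch (assumed, not from the paper) of a discrete Bayesian belief
# update over an image graph: nodes are pre-recorded images, edges define the
# transition model, and the observation likelihood is an image-similarity score.
import numpy as np


def belief_update(belief, transition, likelihood):
    """One POMDP-style belief update step.

    belief     : (n,) prior probability over the n image nodes
    transition : (n, n) matrix, transition[i, j] = P(next node j | current node i)
    likelihood : (n,) P(live camera image | robot is at node j), e.g. a
                 normalized similarity between the live image and each
                 pre-recorded image
    returns    : (n,) posterior belief
    """
    predicted = belief @ transition      # prediction: propagate belief along graph edges
    posterior = predicted * likelihood   # correction: weight by image similarity
    total = posterior.sum()
    if total == 0.0:                     # no node explains the observation; keep prediction
        return predicted
    return posterior / total             # renormalize to a probability distribution


if __name__ == "__main__":
    # Toy usage: four image nodes on a small chain-like graph.
    T = np.array([[0.7, 0.3, 0.0, 0.0],
                  [0.1, 0.6, 0.3, 0.0],
                  [0.0, 0.1, 0.6, 0.3],
                  [0.0, 0.0, 0.2, 0.8]])
    b = np.full(4, 0.25)                      # uniform prior: location unknown
    sim = np.array([0.05, 0.10, 0.70, 0.15])  # live image looks most like node 2
    b = belief_update(b, T, sim)
    print(np.round(b, 3))
```

In this sketch the posterior concentrates on the node whose stored image best matches the live view, while the transition matrix keeps the estimate consistent with the graph structure; the paper's actual method additionally incorporates hierarchical states and visual homing, which are not shown here.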