Learning efficient policies for vision-based navigation

  • Authors:
  • Armin Hornung; Hauke Strasdat; Maren Bennewitz; Wolfram Burgard

  • Affiliations:
  • Department of Computer Science, University of Freiburg, Germany; Department of Computing, Imperial College London, UK; Department of Computer Science, University of Freiburg, Germany; Department of Computer Science, University of Freiburg, Germany

  • Venue:
  • IROS'09: Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems
  • Year:
  • 2009

Abstract

Cameras are popular sensors for robot navigation tasks such as localization, as they are inexpensive, lightweight, and provide rich data. However, fast movements of a mobile robot typically reduce the performance of vision-based localization systems due to motion blur. In this paper, we present a reinforcement learning approach to choosing appropriate velocity profiles for vision-based navigation. The learned policy minimizes the time to reach the destination and implicitly takes the impact of motion blur on observations into account. To reduce the size of the resulting policies, which is desirable on memory-constrained systems, we compress the learned policy via a clustering approach. Extensive simulated and real-world experiments demonstrate that our learned policy significantly outperforms any policy that uses a constant velocity. We furthermore show that our policy is applicable to different environments. Additional experiments demonstrate that our compressed policies do not result in a performance loss compared to the originally learned policy.
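The abstract gives no implementation details, but its two main ideas (learning a velocity-selection policy by reinforcement learning with a time-based cost, and compressing that policy by clustering) can be illustrated with a minimal sketch. The state discretization, reward, motion-blur model, and clustering setup below are illustrative assumptions, not the authors' actual formulation.

```python
# Minimal sketch (not the authors' code): tabular Q-learning over discrete
# velocity actions, followed by policy compression via k-means clustering.
# State space, reward, and blur model are illustrative assumptions only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

N_DIST = 20                          # distance-to-goal bins (assumption)
N_UNC = 5                            # localization-uncertainty bins (assumption)
VELOCITIES = [0.2, 0.4, 0.6, 0.8]    # candidate velocities in m/s (assumption)
N_ACTIONS = len(VELOCITIES)

Q = np.zeros((N_DIST, N_UNC, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(dist, unc, a):
    """Toy transition: faster motion covers more distance per step but blurs
    the camera image, raising localization uncertainty and stalling progress."""
    v = VELOCITIES[a]
    blur_prob = 0.15 * v / VELOCITIES[-1] * (1 + unc)   # crude blur proxy
    if rng.random() < blur_prob:
        unc = min(unc + 1, N_UNC - 1)      # observation degraded
        progress = 0                        # robot must relocalize
    else:
        unc = max(unc - 1, 0)
        progress = int(round(v / VELOCITIES[0]))
    dist = max(dist - progress, 0)
    return dist, unc, -1.0, dist == 0       # every time step costs 1

for episode in range(5000):
    dist, unc, done = N_DIST - 1, 0, False
    while not done:
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(Q[dist, unc]))
        nd, nu, r, done = step(dist, unc, a)
        target = r + (0.0 if done else gamma * np.max(Q[nd, nu]))
        Q[dist, unc, a] += alpha * (target - Q[dist, unc, a])
        dist, unc = nd, nu

# Compress the learned policy: cluster states by their Q-value vectors and
# store a single greedy action per cluster instead of the full table.
q_flat = Q.reshape(-1, N_ACTIONS)
k = 8                                     # number of clusters (assumption)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(q_flat)
cluster_action = np.array(
    [np.argmax(q_flat[labels == c].mean(axis=0)) for c in range(k)]
)

def compressed_policy(dist, unc):
    """Look up the velocity via the cluster label of the discretized state."""
    return VELOCITIES[cluster_action[labels[dist * N_UNC + unc]]]

print("Example velocity far from goal:", compressed_policy(N_DIST - 1, 0))
```

In this toy version the compressed policy needs only k cluster actions plus a state-to-cluster lookup rather than the full Q-table, which mirrors the memory-saving motivation stated in the abstract; how the paper actually clusters and represents the policy may differ.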