Free-energy based reinforcement learning for vision-based navigation with high-dimensional sensory inputs

  • Authors:
  • Stefan Elfwing;Makoto Otsuka;Eiji Uchibe;Kenji Doya

  • Affiliations:
  • Okinawa Institute of Science and Technology, Kunigami, Okinawa, Japan (all authors)

  • Venue:
  • ICONIP'10 Proceedings of the 17th international conference on Neural information processing: theory and algorithms - Volume Part I
  • Year:
  • 2010

Abstract

Free-energy based reinforcement learning was proposed for learning in high-dimensional state and action spaces, which cannot be handled by standard function approximation methods in reinforcement learning. In this method, the action-value function is approximated as the negative free energy of a restricted Boltzmann machine. In this paper, we test whether free-energy reinforcement learning is feasible for real-robot control with raw, high-dimensional sensory inputs, through the extraction of task-relevant features in the hidden layer. We first demonstrate, in simulation, that a small mobile robot can efficiently learn a vision-based navigation and battery-capturing task. We then demonstrate, for a simpler battery-capturing task, that free-energy reinforcement learning can be used for online learning in a real robot. Analysis of the learned weights showed that action-oriented state coding was achieved in the hidden layer.
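The core idea in the abstract — approximating the action-value function as the negative free energy of a restricted Boltzmann machine over state-action visible units — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the weight matrices `W_s`, `W_a`, biases `b_s`, `b_a`, `c`, and the binary state/action encoding are assumed placeholders. For binary hidden units, the hidden layer can be summed out analytically, giving Q(s, a) = b·v + Σ_j log(1 + exp(c_j + (v·W)_j)).

```python
import numpy as np

def negative_free_energy(s, a, W_s, W_a, b_s, b_a, c):
    """Q(s, a) approximated as -F(s, a) for an RBM whose visible layer
    is the concatenated (state, action) vector.

    s, a   : binary visible vectors for state and action
    W_s    : state-to-hidden weights, shape (len(s), n_hidden)
    W_a    : action-to-hidden weights, shape (len(a), n_hidden)
    b_s, b_a : visible biases; c : hidden biases
    """
    # Total input to each hidden unit from the clamped visible layer
    h_in = c + s @ W_s + a @ W_a
    # Free energy with binary hidden units summed out:
    # F(v) = -b.v - sum_j log(1 + exp(h_in_j))
    # np.logaddexp(0, x) computes log(1 + exp(x)) stably.
    free_energy = -(s @ b_s + a @ b_a) - np.sum(np.logaddexp(0.0, h_in))
    return -free_energy  # the action-value estimate

# Hypothetical usage with small random weights
rng = np.random.default_rng(0)
n_s, n_a, n_h = 6, 3, 8
params = (rng.normal(size=(n_s, n_h)), rng.normal(size=(n_a, n_h)),
          rng.normal(size=n_s), rng.normal(size=n_a), rng.normal(size=n_h))
s = rng.integers(0, 2, n_s).astype(float)   # e.g. binarized sensory input
a = np.eye(n_a)[1]                           # one-hot action coding
q_value = negative_free_energy(s, a, *params)
```

In the learning setting described above, the temporal-difference error on this Q estimate would drive updates of the RBM weights, and the hidden activations `sigmoid(h_in)` serve as the task-relevant features extracted from the high-dimensional input.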