Continuous-state reinforcement learning with fuzzy approximation

  • Authors:
  • Lucian Buşoniu; Damien Ernst; Bart De Schutter; Robert Babuška

  • Affiliations:
  • Delft University of Technology, The Netherlands; Supélec, Rennes, France; Delft University of Technology, The Netherlands; Delft University of Technology, The Netherlands

  • Venue:
  • ALAMAS'05/ALAMAS'06/ALAMAS'07: Proceedings of the 5th, 6th, and 7th European Conference on Adaptive and Learning Agents and Multi-Agent Systems: Adaptation and Multi-Agent Learning
  • Year:
  • 2005

Abstract

Reinforcement learning (RL) is a widely used learning paradigm for adaptive agents. Several convergent and consistent RL algorithms exist and have been intensively studied, but in their original form they require the environment states and agent actions to take values in a relatively small discrete set. Fuzzy representations for approximate, model-free RL have been proposed in the literature for the more difficult case where the state-action space is continuous. In this work, we propose a fuzzy approximation architecture similar to those previously used for Q-learning, but we combine it with the model-based Q-value iteration algorithm. We prove that the resulting algorithm converges. We also give a modified, asynchronous variant of the algorithm that converges at least as fast as the original version. An illustrative simulation example is provided.
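
The abstract only sketches the algorithm, so a small illustration may help. The snippet below is a minimal sketch of model-based fuzzy Q-iteration under simplifying assumptions: a one-dimensional state space covered by triangular fuzzy membership functions, a small discrete action set, and placeholder dynamics f and reward rho chosen purely for illustration (they are not the paper's benchmark, and the specific grid, discount factor, and function names are assumptions). The Q-function is approximated as Q(x, u_j) ≈ Σ_i φ_i(x) θ[i, j]; each iteration applies the Bellman backup at the cores of the fuzzy sets, and the asynchronous variant reuses parameters already updated within the current sweep.

```python
# Minimal sketch of fuzzy Q-iteration (not the authors' reference implementation).
# Assumptions: known dynamics f and reward rho (model-based setting), a 1-D state
# space with triangular fuzzy sets, and a small discrete action set.
import numpy as np

gamma = 0.95                              # discount factor (illustrative value)
centers = np.linspace(-1.0, 1.0, 11)      # cores of the triangular fuzzy sets
actions = np.array([-0.1, 0.0, 0.1])      # discrete action set

def phi(x):
    """Triangular membership degrees of state x, normalized to sum to 1."""
    mu = np.maximum(0.0, 1.0 - np.abs(x - centers) / (centers[1] - centers[0]))
    return mu / mu.sum()

def f(x, u):
    """Placeholder deterministic dynamics, assumed known by the algorithm."""
    return np.clip(x + u, -1.0, 1.0)

def rho(x, u):
    """Placeholder reward: penalize distance from the origin."""
    return -x ** 2

def fuzzy_q_iteration(n_iters=200, asynchronous=False):
    # One parameter per (fuzzy set, action) pair: Q(x, u_j) ~ phi(x) @ theta[:, j]
    theta = np.zeros((len(centers), len(actions)))
    for _ in range(n_iters):
        new_theta = theta if asynchronous else theta.copy()
        for i, xi in enumerate(centers):          # backup at the fuzzy-set cores
            for j, uj in enumerate(actions):
                x_next = f(xi, uj)
                # synchronous: read from the previous iterate theta;
                # asynchronous: immediately reuse values updated this sweep
                source = new_theta if asynchronous else theta
                q_next = phi(x_next) @ source     # approximate Q(x', .) 
                new_theta[i, j] = rho(xi, uj) + gamma * q_next.max()
        theta = new_theta
    return theta

# Example use: compute the parameters and read off a greedy action.
theta = fuzzy_q_iteration(asynchronous=True)
x = 0.7
u_greedy = actions[np.argmax(phi(x) @ theta)]
```

Under these assumptions the synchronous and asynchronous sweeps converge to the same parameter vector, with the asynchronous version typically needing no more iterations, which mirrors the abstract's claim that the asynchronous variant converges at least as fast as the original.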