Exploration and exploitation balance management in fuzzy reinforcement learning

  • Authors:
  • Vali Derhami; Vahid Johari Majd; Majid Nili Ahmadabadi

  • Affiliations:
  • Vali Derhami and Vahid Johari Majd: Intelligent Control Systems Laboratory, School of Electrical Engineering, Tarbiat Modares University, P.O. Box 14115-143, Tehran, Iran
  • Majid Nili Ahmadabadi: Control and Intelligent Processing Center of Excellence, University of Tehran, Tehran, Iran; and School of Cognitive Science, Institute for Research in Fundamental Sciences, Tehran, Iran

  • Venue:
  • Fuzzy Sets and Systems
  • Year:
  • 2010

Quantified Score

Hi-index 0.20

Abstract

This paper proposes a fuzzy balance-management scheme between exploration and exploitation that can be implemented in any critic-only fuzzy reinforcement learning method. Because of its advantages, the paper focuses on a recently developed continuous reinforcement learning method called fuzzy Sarsa learning (FSL). Establishing the balance depends strongly on the accuracy of the action-value function approximation. First, the overfitting problem in approximating the action-value function in continuous reinforcement learning algorithms is discussed, and a new adaptive learning rate is proposed to prevent it. By relating the learning rate to the inverse of the "fuzzy visit value" of the current state, the training data set is forced to have a uniform effect on the weight parameters of the approximator, and the overfitting is thereby resolved. Then, a fuzzy balancer is introduced to balance exploration against exploitation by generating a suitable temperature factor for the Softmax formula. Finally, an enhanced FSL (EFSL) is obtained by integrating the proposed adaptive learning rate and the fuzzy balancer into FSL. Simulation results show that EFSL eliminates overfitting, manages the balance well, and outperforms FSL in terms of learning speed and action quality.
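The two mechanisms named in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the fuzzy balancer and the fuzzy visit value are fuzzy-inference quantities in the paper, so the function names, the `base_rate`/`eps` parameters, and the scalar `visit_value` stand in here as simplified assumptions. The sketch shows only the generic structure: Softmax (Boltzmann) action selection whose temperature controls exploration vs. exploitation, and a learning rate inversely related to how often a state has been visited.

```python
import numpy as np

def softmax_policy(q_values, temperature):
    """Boltzmann action-selection probabilities.

    A high temperature flattens the distribution (exploration); a low
    temperature concentrates it on the best action (exploitation).
    """
    prefs = np.asarray(q_values, dtype=float) / temperature
    prefs -= prefs.max()          # subtract max for numerical stability
    probs = np.exp(prefs)
    return probs / probs.sum()

def adaptive_learning_rate(base_rate, visit_value, eps=1e-6):
    """Learning rate inversely related to the visit value of the state.

    Frequently visited states get smaller updates, so the training data
    has a more uniform effect on the approximator's weight parameters
    (the idea behind the paper's overfitting fix; `visit_value` here is
    a plain counter, not the paper's fuzzy visit value).
    """
    return base_rate / (visit_value + eps)

q = [1.0, 1.5, 0.5]
print(softmax_policy(q, temperature=10.0))  # nearly uniform: explore
print(softmax_policy(q, temperature=0.1))   # peaked on the best action: exploit
print(adaptive_learning_rate(0.5, visit_value=4.0))
```

In the paper, the temperature passed to the Softmax formula would itself be produced by the fuzzy balancer rather than fixed by hand, which is what lets the scheme shift between exploration and exploitation as learning progresses.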