Fuzzy Q-Map Algorithm for Reinforcement Learning

  • Authors:
  • Youngah Lee; Seokmi Hong

  • Affiliations:
  • Department of Computer Engineering, Kyung Hee University, Seocheon-Dong, Giheung-Gu, Yongin-si, Gyeonggi-Do 446-701, Korea; School of Computer, Information and Communication Engineering, Sangji University, 660 USan-Dong, WonJu-Si, KangWon-Do 220-702, Korea

  • Venue:
  • Computational Intelligence and Security
  • Year:
  • 2007


Abstract

In reinforcement learning, it is important to obtain nearly correct answers early: good early predictions reduce subsequent prediction error and accelerate learning. We propose Fuzzy Q-Map, a function-approximation algorithm based on on-line fuzzy clustering, to accelerate learning. Fuzzy Q-Map can handle the uncertainty that arises from the absence of an environment model. Applying membership functions to reinforcement learning reduces prediction error and the destructive-interference phenomenon caused by changes in the distribution of the training data. To evaluate Fuzzy Q-Map's performance, we experimented on the mountain-car problem and compared it with CMAC. CMAC achieves an 80% prediction rate from 250 training samples, while Fuzzy Q-Map learns faster and maintains the 80% prediction rate from 250 training samples onward. Fuzzy Q-Map may be applied to simulation domains characterized by uncertainty and complexity.
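To make the idea concrete, here is a minimal sketch of a membership-function-based Q-value approximator. It is an illustration under assumptions, not the paper's implementation: the `FuzzyQMap` class, Gaussian membership functions, and the fixed cluster prototypes are all assumptions introduced for this example (the paper uses on-line fuzzy clustering, whose exact update rules are not given in the abstract).

```python
import numpy as np

class FuzzyQMap:
    """Sketch: Q-values stored per fuzzy cluster, read out and updated
    via normalized membership weights (hypothetical design)."""

    def __init__(self, prototypes, n_actions, sigma=0.3, lr=0.5):
        self.prototypes = np.asarray(prototypes, dtype=float)  # (k, d) cluster centers
        self.q = np.zeros((len(self.prototypes), n_actions))   # per-cluster Q-values
        self.sigma = sigma                                     # membership width (assumed Gaussian)
        self.lr = lr                                           # learning rate

    def membership(self, state):
        # Gaussian membership of the state in each cluster, normalized to sum to 1.
        d2 = np.sum((self.prototypes - state) ** 2, axis=1)
        mu = np.exp(-d2 / (2.0 * self.sigma ** 2))
        return mu / (mu.sum() + 1e-12)

    def q_values(self, state):
        # Q(s, a) = sum_i mu_i(s) * q_i(a): a membership-weighted average.
        return self.membership(state) @ self.q

    def update(self, state, action, td_target):
        # Distribute the TD error over clusters in proportion to membership.
        # Localizing updates this way is one route to limiting the destructive
        # interference the abstract mentions.
        mu = self.membership(state)
        td_error = td_target - self.q_values(state)[action]
        self.q[:, action] += self.lr * mu * td_error
```

A usage example: with three prototypes covering a 1-D state space, repeated updates toward a target value move the interpolated Q-value toward that target while only the nearby clusters change substantially.

```python
fq = FuzzyQMap(prototypes=[[0.0], [0.5], [1.0]], n_actions=2)
s = np.array([0.4])
for _ in range(50):
    fq.update(s, action=1, td_target=1.0)
# fq.q_values(s)[1] is now close to 1.0
```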