Dynamic Self-Generated Fuzzy Systems for Reinforcement Learning

  • Authors:
  • Meng Joo Er; Yi Zhou

  • Affiliations:
  • Intelligent Systems Centre, 50 Nanyang Drive, BorderX Block, Singapore (both authors)

  • Venue:
  • CIMCA '05 Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce Vol-1 (CIMCA-IAWTIC'06) - Volume 01
  • Year:
  • 2005

Abstract

This paper proposes a novel methodology, named Dynamic Self-Generated Fuzzy Q-learning (DSGFQL), for generating fuzzy reinforcement learning systems without a priori knowledge or expert effort. Compared with the authors' previous work on Dynamic Fuzzy Q-learning (DFQL), DSGFQL offers an automatic generation method for fuzzy reinforcement learning that is capable of both creating and pruning fuzzy rules. As in DFQL, the ε-completeness criterion is applied to recruit new fuzzy rules. In addition, global and local reward criteria are adopted to modify the parameters of fuzzy rules that satisfy the ε-completeness criterion. In DSGFQL, local reward and local firing strength are utilized to delete unsatisfactory and unnecessary fuzzy rules. In this paper, DSGFQL is applied to a wall-following task for a mobile robot. Experimental results and comparative studies between the novel DSGFQL and DFQL demonstrate that the proposed DSGFQL is superior to DFQL in both overall performance and computational efficiency: it incurs fewer failures, attains higher reward, and requires fewer fuzzy rules. Moreover, the proposed framework can also be applied to generate fuzzy inference systems (FIS) automatically for other reinforcement learning methods.
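The rule-management ideas summarized in the abstract can be sketched in code: a rule is recruited when no existing fuzzy rule covers the current state with sufficient firing strength (the ε-completeness check), and rules are later pruned when their local reward or average firing strength falls below a threshold. This is a minimal illustrative sketch only; the Gaussian memberships, running-average statistics, class and parameter names, and all threshold values are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

class DSGFQLSketch:
    """Sketch of epsilon-completeness rule recruitment and
    reward/firing-strength-based rule pruning (names and thresholds
    are illustrative assumptions, not the paper's exact algorithm)."""

    def __init__(self, eps=0.5, width=0.3, k_reward=-0.1, k_fire=0.05,
                 n_actions=3):
        self.eps = eps            # epsilon-completeness threshold
        self.width = width        # Gaussian membership width for new rules
        self.k_reward = k_reward  # prune rules whose local reward is below this
        self.k_fire = k_fire      # prune rules whose mean firing strength is below this
        self.n_actions = n_actions
        self.centers = []         # one fuzzy rule per center
        self.q = []               # per-rule q-values over actions
        self.local_reward = []    # running local reward per rule
        self.mean_fire = []       # running mean firing strength per rule

    def firing(self, x):
        """Gaussian firing strength of every rule for state x."""
        x = np.asarray(x, dtype=float)
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2 * self.width ** 2))
                         for c in self.centers])

    def maybe_add_rule(self, x):
        """Recruit a new rule when epsilon-completeness fails, i.e. no
        existing rule fires on x with strength >= eps."""
        f = self.firing(x)
        if len(f) == 0 or f.max() < self.eps:
            self.centers.append(np.asarray(x, dtype=float))
            self.q.append(np.zeros(self.n_actions))
            self.local_reward.append(0.0)
            self.mean_fire.append(0.0)
            return True
        return False

    def update_stats(self, x, reward, beta=0.1):
        """Update each rule's running firing strength and local reward."""
        for i, fi in enumerate(self.firing(x)):
            self.mean_fire[i] += beta * (fi - self.mean_fire[i])
            self.local_reward[i] += beta * (fi * reward - self.local_reward[i])

    def prune(self):
        """Delete unsatisfactory (poor local reward) or unnecessary
        (rarely firing) rules."""
        keep = [i for i in range(len(self.centers))
                if self.local_reward[i] > self.k_reward
                and self.mean_fire[i] > self.k_fire]
        self.centers = [self.centers[i] for i in keep]
        self.q = [self.q[i] for i in keep]
        self.local_reward = [self.local_reward[i] for i in keep]
        self.mean_fire = [self.mean_fire[i] for i in keep]
```

For example, visiting a state far from all existing rule centers triggers recruitment, while a state already well covered does not; a rule whose running firing strength stays near zero is later removed by `prune()`.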