QFCS: A Fuzzy LCS in Continuous Multi-step Environments with Continuous Vector Actions

  • Authors:
  • José Ramírez-Ruiz; Manuel Valenzuela-Rendón; Hugo Terashima-Marín

  • Affiliations:
  • Center for Intelligent Systems, Tecnológico de Monterrey, Monterrey, N.L., Mexico 64849 (all authors)

  • Venue:
  • Proceedings of the 10th International Conference on Parallel Problem Solving from Nature (PPSN X)
  • Year:
  • 2008


Abstract

This paper introduces QFCS, a new approach to fuzzy learning classifier systems. QFCS can solve multistep reinforcement learning problems in continuous environments with a set of continuous vector actions. Rules in the QFCS are small fuzzy systems, and QFCS uses a Q-learning algorithm to learn the mapping between inputs and outputs. The results presented here show that QFCS can evolve rules that represent only those parts of the input and action spaces where the expected values matter for decision making. Results for QFCS are compared with those obtained by Q-learning under a fine discretization: on one-dimensional problems with an optimal solution, the new approach converges much as Q-learning does, while on two-dimensional problems QFCS learns suboptimal solutions in settings where Q-learning has difficulty converging because of that fine discretization.
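The abstract contrasts QFCS with tabular Q-learning applied to a discretized continuous space. As background, here is a minimal sketch of that baseline on a hypothetical 1-D task (the environment, bin count, and parameters are illustrative assumptions, not the authors' experimental setup): a continuous state in [0, 1] is discretized into bins, and the agent must reach a goal region near x = 1.

```python
import random

N_BINS = 20
ALPHA, GAMMA = 0.5, 0.95           # learning rate and discount factor
STEP = 1.0 / N_BINS
ACTIONS = (-STEP, STEP)            # action 0 = step left, action 1 = step right

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_BINS)]   # one row of Q-values per bin

def to_bin(x):
    """Discretize a continuous state x in [0, 1] into a bin index."""
    return min(int(x * N_BINS), N_BINS - 1)

for episode in range(2000):
    x = random.random()             # uniform random start state
    for _ in range(100):
        s = to_bin(x)
        a = random.randrange(2)     # random behavior policy (Q-learning is off-policy)
        x = min(max(x + ACTIONS[a], 0.0), 1.0)
        s2 = to_bin(x)
        done = x >= 0.95            # goal region near the right edge
        reward = 1.0 if done else 0.0
        # Standard Q-learning update toward r + gamma * max_a' Q(s', a')
        target = reward + (0.0 if done else GAMMA * max(Q[s2]))
        Q[s][a] += ALPHA * (target - Q[s][a])
        if done:
            break

# Greedy policy: after training, "step right" should dominate in most bins.
policy = [0 if q[0] >= q[1] else 1 for q in Q]
```

The table grows as the product of the per-dimension resolutions, which illustrates why a fine discretization becomes hard for Q-learning in two dimensions, the regime in which the paper reports QFCS still learning suboptimal but usable solutions.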