This paper introduces QFCS, a new approach to fuzzy learning classifier systems. QFCS can solve multistep reinforcement learning problems in continuous environments with a set of continuous vector actions. Rules in QFCS are small fuzzy systems, and QFCS uses a Q-learning algorithm to learn the mapping between inputs and outputs. The results presented here show that QFCS can evolve rules that cover only those parts of the input and action spaces where the expected values matter for making decisions. QFCS is compared with Q-learning run over a fine discretization of the same spaces: on one-dimensional problems QFCS converges to the optimal solution much as Q-learning does, while on two-dimensional problems QFCS learns suboptimal solutions and Q-learning struggles to converge because of the fine discretization it requires.
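The baseline that QFCS is compared against is tabular Q-learning over a discretized state space. As a rough illustration of that baseline (not the paper's own code; the corridor task, state count, and learning parameters here are illustrative assumptions), a minimal epsilon-greedy Q-learning loop might look like:

```python
import random

def q_learning(n_states, n_actions, step, episodes=5000,
               max_steps=50, alpha=0.1, gamma=0.9,
               epsilon=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = rng.randrange(n_states)
        for _ in range(max_steps):
            if rng.random() < epsilon:            # explore
                a = rng.randrange(n_actions)
            else:                                 # exploit
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # standard Q-learning update toward the bootstrapped target
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            if done:
                break
            s = s2
    return Q

# Toy one-dimensional corridor discretized into 5 cells (a hypothetical
# task): action 1 moves right, action 0 moves left; reward 1 for
# reaching the rightmost cell, which ends the episode.
def corridor_step(s, a, n=5):
    s2 = min(max(s + (1 if a == 1 else -1), 0), n - 1)
    return s2, (1.0 if s2 == n - 1 else 0.0), s2 == n - 1

Q = q_learning(5, 2, corridor_step)
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(5)]
print(policy)  # greedy policy after learning
```

The table Q grows with the product of the state and action discretizations, which is why a fine discretization in two dimensions becomes hard for plain Q-learning; QFCS instead evolves fuzzy rules that generalize over the continuous spaces.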