Learning classifier systems (LCSs) belong to a class of algorithms based on the principle of self-organization and have frequently been applied to solving mazes, an important type of reinforcement learning (RL) problem. Maze problems are simplified virtual models of real environments, useful for developing the core algorithms of many real-world applications that involve navigation. However, the best achievements of LCSs in maze problems are still mostly confined to non-aliasing environments, and LCS complexity obstructs a proper analysis of the reasons for failure. We construct a new LCS agent with a simpler and more transparent performance mechanism that can nevertheless solve mazes better than existing algorithms. We retain the structure of a predictive LCS model, strip out the evolutionary mechanism, simplify the reinforcement learning procedure, and equip the agent with an associative-perception capability adopted from psychology. To improve our understanding of the nature and structure of maze environments, we analyze the mazes used in research over the last two decades, introduce a set of maze complexity characteristics, and develop a set of new maze environments. We then run our new LCS with associative perception through the old and new aliasing mazes, which represent partially observable Markov decision processes (POMDPs), and demonstrate that it performs at least as well as, and in some cases better than, other published systems.
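To make the aliasing property concrete, the following is a minimal sketch (a hypothetical toy maze, not one of the paper's environments) of why such mazes are POMDPs: the agent perceives only its eight surrounding cells, so two distinct positions can yield an identical percept, and a purely reactive policy cannot tell them apart.

```python
# Toy grid maze; '#' is a wall, '.' is a free cell.
MAZE = [
    "#########",
    "#.......#",
    "#.#.#.#.#",
    "#.......#",
    "#########",
]

def observe(row, col):
    """Return the 8-neighbour wall pattern around (row, col).

    This local pattern is the agent's ONLY percept, so it may not
    uniquely identify the agent's true position in the maze.
    """
    return tuple(
        MAZE[row + dr][col + dc]
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )

# Two different cells produce the same local view ("aliased" states):
# a memoryless policy must act identically in both, even when the
# optimal actions differ -- the hallmark of a POMDP.
print(observe(1, 2) == observe(1, 4))  # identical percepts for distinct states
```

The aliased pair here is structural (the repeating `#.#` pattern in the middle row); the maze complexity characteristics discussed in the paper classify environments by how many such indistinguishable states they contain.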