Transferring evolved reservoir features in reinforcement learning tasks
EWRL'11 Proceedings of the 9th European conference on Recent Advances in Reinforcement Learning
The Reinforcement Learning (RL) paradigm is an appropriate formulation for goal-directed, sequential decision making by autonomous agents. For RL methods to perform well on difficult, complex, real-world tasks, however, the choice and architecture of an appropriate function approximator is of crucial importance. This work presents a method for automatically discovering such function approximators, based on a synergy of ideas and techniques that have each proven effective on their own. Using Echo State Networks (ESNs) as our function approximators of choice, we adapt them, by combining evolution and learning, to develop ad-hoc architectures suited to the problem at hand. ESNs were chosen for their ability to handle both non-linear and non-Markovian tasks, while also being capable of learning online through simple gradient-descent temporal difference learning. To create networks that enable efficient learning, a neuroevolution procedure was applied: appropriate topologies and weights were acquired by using the NeuroEvolution of Augmenting Topologies (NEAT) method as a meta-search algorithm and by adapting ideas such as historical markings, complexification, and speciation to the specifics of ESNs. Our methodology is tested on both supervised and reinforcement learning testbeds with promising results.
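The online learning component described above — a fixed random reservoir whose linear readout is trained by gradient-descent temporal difference learning — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the reservoir sizes, scaling constants, learning rate, discount factor, and the toy observation/reward stream are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: a small reservoir driven by a 2-dimensional observation.
n_in, n_res = 2, 50

# Fixed random reservoir weights; rescaling the spectral radius below 1
# is the standard heuristic for the echo state property. Only the linear
# readout w_out is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

w_out = np.zeros(n_res)   # linear value readout: V(s) ~ w_out . x
x = np.zeros(n_res)       # reservoir state

def reservoir_step(u, x):
    """One tanh reservoir update for input u."""
    return np.tanh(W_in @ u + W @ x)

alpha, gamma = 0.05, 0.95  # assumed learning rate and discount factor

# TD(0) updates of the readout over a stand-in transition stream.
for t in range(200):
    u = rng.uniform(-1, 1, n_in)   # stand-in observation
    x_next = reservoir_step(u, x)
    reward = float(u.sum())        # stand-in reward signal
    v, v_next = w_out @ x, w_out @ x_next
    td_error = reward + gamma * v_next - v
    w_out += alpha * td_error * x  # gradient-descent TD on the linear readout
    x = x_next
```

Because only the readout weights are adapted online, the reservoir topology and internal weights remain free to be shaped by the outer NEAT-style evolutionary search.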