A possible alternative to topology fine-tuning for Neural Network (NN) optimization is to use Echo State Networks (ESNs), recurrent NNs built upon a large reservoir of sparsely and randomly connected neurons. The promises of ESNs have been fulfilled for supervised learning tasks, but problems without explicit training targets, e.g. control problems, require more flexible optimization methods, such as Evolutionary Algorithms. This paper proposes to apply CMA-ES, the state-of-the-art method in evolutionary continuous parameter optimization, to the evolutionary learning of ESN parameters. First, a standard supervised learning problem is used to validate the approach and compare it to the standard linear-regression learning of the output weights. The flexibility of evolutionary optimization, however, allows us to optimize not only the output weights but also, or alternatively, other ESN parameters, sometimes leading to improved results. The classical double pole balancing control problem is then used to demonstrate the feasibility of evolutionary (i.e. reinforcement) learning of ESNs. We show that the evolutionary ESN obtains results comparable to those of the best topology-learning methods.
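The core idea can be illustrated with a short sketch (not the paper's implementation): keep a fixed, sparse random reservoir, define fitness as the readout error of a candidate output-weight vector, and evolve those weights. For brevity a plain (mu, lambda) evolution strategy stands in for CMA-ES here; the reservoir size, sparsity, evolution hyperparameters, and the toy sine-prediction task are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; real ESNs typically use hundreds of reservoir units.
N_RES = 30

# Sparse random reservoir, rescaled to spectral radius 0.9 so the
# "echo state" (fading-memory) condition holds.
W = rng.normal(size=(N_RES, N_RES)) * (rng.random((N_RES, N_RES)) < 0.2)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(scale=0.5, size=N_RES)  # input weights (scalar input)

def run_reservoir(u):
    """Drive the frozen reservoir with input sequence u; return all states."""
    x = np.zeros(N_RES)
    states = np.empty((len(u), N_RES))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + W_in * u_t)
        states[t] = x
    return states

def mse(w_out, states, target):
    """Fitness of a candidate output-weight vector: linear readout error."""
    return np.mean((states @ w_out - target) ** 2)

# Toy supervised task: one-step-ahead prediction of a sine wave.
s = np.sin(np.linspace(0.0, 8.0 * np.pi, 401))
u, y = s[:-1], s[1:]
states = run_reservoir(u)

# Simple (mu, lambda) evolution strategy on the output weights, standing in
# for CMA-ES (which additionally self-adapts a full covariance matrix).
mean, sigma = np.zeros(N_RES), 0.5
for gen in range(60):
    pop = mean + sigma * rng.normal(size=(24, N_RES))   # lambda = 24 offspring
    fits = np.array([mse(w, states, y) for w in pop])
    mean = pop[np.argsort(fits)[:6]].mean(axis=0)       # recombine the mu = 6 best
    sigma *= 0.95                                       # naive step-size decay

print(f"evolved readout MSE: {mse(mean, states, y):.4f}")
```

Swapping the toy strategy for a real CMA-ES implementation only changes the inner loop, and the same fitness function can be extended to take reservoir hyperparameters (e.g. spectral radius or input scaling) as extra genes, which corresponds to the paper's "other ESN parameters" variant.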