Their convincing performance has generated growing interest in using evolutionary algorithms for reinforcement learning. We propose learning neural network policies with the covariance matrix adaptation evolution strategy (CMA-ES), a randomized variable-metric search algorithm for continuous optimization. We argue that this approach, which we refer to as the CMA Neuroevolution Strategy (CMA-NeuroES), is ideally suited for reinforcement learning, in particular because it is based on ranking policies (and is therefore robust against noise), efficiently detects correlations between parameters, and infers a search direction from scalar reinforcement signals. We evaluate the CMA-NeuroES on five (Markovian and non-Markovian) variants of the common pole balancing problem. The results are compared to those reported in a recent study covering several RL algorithms, and the CMA-NeuroES shows the best overall performance.
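The core idea of ranking-based policy search can be illustrated with a deliberately simplified sketch: a (μ/μ, λ) evolution strategy with isotropic mutations (not full CMA-ES, which additionally adapts a full covariance matrix and step size) optimizing the weights of a tiny policy on a hypothetical one-dimensional regulation task standing in for pole balancing. All names and the toy task are illustrative assumptions, not the authors' implementation; note that only the *ranking* of episode returns enters the update, which is what makes such methods robust to noisy scalar reinforcement signals.

```python
import math
import random

def policy_action(params, x):
    # Tiny one-neuron policy: a = tanh(w*x + b). params = [w, b].
    w, b = params
    return math.tanh(w * x + b)

def episode_return(params, steps=50):
    # Toy regulation task (illustrative stand-in for pole balancing):
    # drive the state x toward 0; return = negative accumulated squared error.
    x = 1.0
    cost = 0.0
    for _ in range(steps):
        a = policy_action(params, x)
        x = x + 0.2 * a
        cost += x * x
    return -cost  # higher is better

def evolve(generations=30, lam=16, mu=4, sigma=0.3, seed=0):
    # Rank-based (mu/mu, lambda)-ES with a fixed isotropic mutation strength.
    # Each generation: sample lam candidates around the mean, rank them by
    # episode return, and recombine the mu best into the new mean.
    rng = random.Random(seed)
    mean = [0.0, 0.0]
    for _ in range(generations):
        pop = [[m + sigma * rng.gauss(0.0, 1.0) for m in mean]
               for _ in range(lam)]
        pop.sort(key=episode_return, reverse=True)  # only ranks matter
        elite = pop[:mu]
        mean = [sum(p[i] for p in elite) / mu for i in range(len(mean))]
    return mean

if __name__ == "__main__":
    best = evolve()
    print(episode_return([0.0, 0.0]), episode_return(best))
```

The zero-initialized policy leaves the state stuck at x = 1, so any candidate that pushes x toward 0 ranks higher and pulls the mean in that direction; CMA-ES replaces the fixed isotropic sigma here with an adapted full covariance, which is what lets it detect correlations between parameters.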