Nested evolution of an autonomous agent using descriptive encoding
Proceedings of the 10th annual conference on Genetic and evolutionary computation
In this paper, we investigate the use of nested evolution in which each step of one evolutionary process involves running a second evolutionary process. We apply this approach to build an evolutionary system for reinforcement learning (RL) problems. Genetic programming based on a descriptive encoding is used to evolve the neural architecture, while an evolution strategy is used to evolve the connection weights. We test this method on a non-Markovian RL problem involving an autonomous foraging agent, finding that the evolved networks significantly outperform a rule-based agent serving as a control. We also demonstrate that nested evolution, partitioning into subpopulations, and crossover operations all act synergistically in improving performance in this context.
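The nested-evolution idea described above can be sketched in miniature: an outer loop evolves the network architecture while, for each candidate architecture, an inner evolution strategy evolves the weights. The toy below is an assumption-laden illustration, not the authors' system — it substitutes a simple mutation over a single hidden-layer size for genetic programming with a descriptive encoding, uses a (1+1)-ES as the inner loop, and scores fitness on a stand-in regression task rather than the foraging domain; all function names are hypothetical.

```python
import math
import random

def forward(weights, hidden, x):
    # Minimal 1-input, 1-output net with `hidden` tanh units.
    w1 = weights[:hidden]              # input -> hidden weights
    b1 = weights[hidden:2 * hidden]    # hidden biases
    w2 = weights[2 * hidden:]          # hidden -> output weights
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(hidden)]
    return sum(w2[i] * h[i] for i in range(hidden))

def fitness(weights, hidden):
    # Stand-in task: negative squared error approximating f(x) = x^2.
    xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
    return -sum((forward(weights, hidden, x) - x * x) ** 2 for x in xs)

def inner_es(hidden, steps=200, sigma=0.3):
    # Inner loop: a (1+1) evolution strategy over the weight vector
    # for a fixed architecture.
    n = 3 * hidden
    best = [random.gauss(0, 1) for _ in range(n)]
    best_f = fitness(best, hidden)
    for _ in range(steps):
        child = [w + random.gauss(0, sigma) for w in best]
        f = fitness(child, hidden)
        if f >= best_f:
            best, best_f = child, f
    return best_f

def outer_loop(generations=10):
    # Outer loop: evolve the architecture (here just the hidden-layer
    # size), running a full inner ES to evaluate each candidate.
    hidden = 2
    best_f = inner_es(hidden)
    for _ in range(generations):
        cand = max(1, hidden + random.choice([-1, 1]))
        f = inner_es(cand)
        if f >= best_f:
            hidden, best_f = cand, f
    return hidden, best_f

random.seed(0)
arch, score = outer_loop()
print("hidden units:", arch, "fitness:", score)
```

The nesting is the key cost driver: every outer evaluation pays for an entire inner run, which is why the paper's finding that subpopulations and crossover act synergistically with nesting matters for keeping the search tractable.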