For many complex Reinforcement Learning problems with large, continuous state spaces, neuroevolution (the evolution of artificial neural networks) has achieved promising results. This is especially true when there is noise in sensor and/or actuator signals. These results have mainly been obtained in offline learning settings, where the training and evaluation phases of the system are separated. In contrast, in online Reinforcement Learning tasks, where the actual performance of the system during its learning phase matters, the results of neuroevolution are significantly impaired by its purely exploratory nature: it does not exploit its knowledge of the performance of individual networks in order to improve its behavior while learning. In this paper we describe modifications that significantly improve the online performance of the neuroevolutionary method Evolutionary Acquisition of Neural Topologies (EANT) and discuss the results obtained on two benchmark problems.
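The exploration/exploitation distinction the abstract draws can be made concrete with a toy sketch. The code below is *not* the paper's EANT modification; it is a minimal, hypothetical evolutionary loop in which, with probability `epsilon`, a mutated candidate is evaluated (exploration), and otherwise the best-known individual is re-deployed (exploitation), so that cumulative reward accrued *during* learning, rather than only the final fitness, is tracked. The `episode_reward` function is an assumed stand-in for an episode with an evolved controller.

```python
import random

def episode_reward(genome):
    """Hypothetical episode return for a 2-parameter 'policy'.

    In the paper's setting this would be a rollout of an EANT-evolved
    network; here it is simply a quadratic reward peaking at (0, 0).
    """
    return -sum(x * x for x in genome)

def online_evolution(generations=200, epsilon=0.3, seed=0):
    """Sketch of exploitation-aware online evolution.

    With probability epsilon a Gaussian mutant of the current best
    genome is evaluated (exploration); otherwise the best-known genome
    is reused for the episode (exploitation). Returns the final best
    fitness and the cumulative reward collected during learning.
    """
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    best_reward = episode_reward(best)
    online_return = 0.0
    for _ in range(generations):
        if rng.random() < epsilon:
            # Exploration: evaluate a mutated candidate.
            candidate = [x + rng.gauss(0, 0.1) for x in best]
            r = episode_reward(candidate)
            if r > best_reward:
                best, best_reward = candidate, r
        else:
            # Exploitation: re-deploy the best-known individual.
            r = episode_reward(best)
        online_return += r
    return best_reward, online_return
```

A purely exploratory learner corresponds to `epsilon = 1.0`: every episode is spent evaluating untested mutants, which is exactly the behavior the abstract identifies as harmful when performance during learning matters.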