We apply CMA-ES, an evolution strategy with covariance matrix adaptation, and TDL (temporal difference learning) to reinforcement learning tasks. In both cases the algorithm optimizes a neural network that provides the policy for playing a simple game (TicTacToe). Our contribution is to study the effect of varying learning conditions on learning speed and quality. Initial failures with ill-suited fitness functions led to the development of new fitness functions, which allow fast learning. Combined with CMA-ES, these new fitness functions reduce the number of games required for training to the same order of magnitude as TDL. The selection of suitable features is also of critical importance for learning success: using the raw board position as an input feature is not very effective, and it is orders of magnitude slower than feature sets that exploit the symmetry of the game. We develop a measure, the feature set utility (FU), which characterizes a given feature set in advance. We show that the lower bound provided by FU is largely in accordance with the results of our repeated experiments for two very different learning algorithms, CMA-ES and TDL.
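To make the TDL side concrete: below is a minimal sketch of the TD(0) update rule the abstract refers to. The paper applies the update to the weights of a neural network; for brevity this sketch uses a tabular value function instead, and the names (td0_update, V) are ours, not the paper's.

    def td0_update(V, s, s_next, reward, alpha=0.1, gamma=1.0):
        # V: dict mapping a hashable board state to its current value estimate.
        # One TD(0) step: move V(s) toward the target reward + gamma * V(s_next).
        v_s = V.get(s, 0.0)
        target = reward + gamma * V.get(s_next, 0.0)
        V[s] = v_s + alpha * (target - v_s)
        return V[s]

    # Usage: states encoded as strings; a winning transition carries reward 1.0.
    V = {}
    td0_update(V, s="X.O|.X.|O..", s_next="X.O|.X.|O.X", reward=1.0)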
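For the CMA-ES side, the standard ask/tell loop looks as follows. This is a sketch under two assumptions not stated in the abstract: it uses Nikolaus Hansen's `cma` Python package as the implementation, and it substitutes a toy quadratic for the paper's game-based fitness functions so the snippet runs stand-alone.

    # Assumption: the `cma` package (pip install cma); the paper does not
    # name a particular implementation.
    import cma

    def fitness(weights):
        # Placeholder for the paper's game-based fitness (e.g. the score of
        # the network parameterized by `weights` over a batch of TicTacToe
        # games). A toy quadratic keeps the sketch runnable; CMA-ES minimizes.
        return sum(w * w for w in weights)

    es = cma.CMAEvolutionStrategy(10 * [0.0], 0.5)  # 10 weights, step size 0.5
    while not es.stop():
        candidates = es.ask()                       # sample weight vectors
        es.tell(candidates, [fitness(c) for c in candidates])  # rank by fitness
    best_weights = es.result.xbest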
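Finally, on exploiting the symmetry of the game: the abstract does not spell out the paper's feature sets, but the idea can be illustrated by mapping each board to a canonical representative of its symmetry class, so that positions differing only by rotation or reflection share one feature vector. The encoding (+1 = X, -1 = O, 0 = empty) and function names here are illustrative assumptions.

    import numpy as np

    def symmetric_variants(board):
        # The 8 symmetries of the 3x3 board: 4 rotations, each optionally mirrored.
        b = np.asarray(board, dtype=int).reshape(3, 3)
        variants = []
        for k in range(4):
            r = np.rot90(b, k)
            variants.append(r)
            variants.append(np.fliplr(r))
        return variants

    def canonical_features(board):
        # Collapse the 8 variants onto the lexicographically smallest one, so
        # symmetric positions yield identical features.
        return min(tuple(int(x) for x in v.flatten())
                   for v in symmetric_variants(board))

    # Usage: two mirrored openings map to the same feature vector.
    a = canonical_features([1, 0, 0, 0, 0, 0, 0, 0, 0])  # X in top-left corner
    b = canonical_features([0, 0, 1, 0, 0, 0, 0, 0, 0])  # X in top-right corner
    assert a == b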