Reinforcement learning for games: failures and successes

  • Authors:
  • Wolfgang Konen, Thomas Bartz-Beielstein

  • Affiliations:
  • Cologne University of Applied Sciences, Gummersbach, Germany (both authors)

  • Venue:
  • Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers
  • Year:
  • 2009

Abstract

We apply CMA-ES, an evolution strategy with covariance matrix adaptation, and TDL (Temporal Difference Learning) to reinforcement learning tasks. In both cases the algorithm optimizes a neural network that provides the policy for playing a simple game (TicTacToe). Our contribution is to study the effect of varying learning conditions on learning speed and quality. Initial failures with ill-suited fitness functions led us to develop new fitness functions that allow fast learning. In combination with CMA-ES, these new fitness functions reduce the number of training games needed to the same order of magnitude as TDL. The selection of suitable features is also of critical importance for learning success. We show that using the raw board position as an input feature is not very effective: it is orders of magnitude slower than feature sets that exploit the symmetry of the game. We develop a measure, the feature set utility (FU), which characterizes a given feature set in advance. The lower bound provided by FU is largely in accordance with the results of our repeated experiments for two very different learning algorithms, CMA-ES and TDL.
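To make the setup concrete, below is a minimal sketch in Python (not the authors' code) of two ingredients the abstract names: a feature map that exploits the eight board symmetries of TicTacToe, and a CMA-ES loop, via the `cma` package, that optimizes the weights of a policy. For brevity the policy is a linear evaluation over canonical features rather than the paper's neural network, and the fitness, the mean score against a random opponent, is a hypothetical stand-in for the paper's tuned fitness functions.

```python
import numpy as np
import cma  # pip install cma

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(b):
    """Return +1 or -1 if that player has three in a row, else 0."""
    for i, j, k in LINES:
        if b[i] != 0 and b[i] == b[j] == b[k]:
            return b[i]
    return 0

def canonical(b):
    """Canonical representative of a board under the 8 symmetries of the
    square, so that all symmetric positions share one feature vector."""
    m = b.reshape(3, 3)
    variants = []
    for k in range(4):
        r = np.rot90(m, k)
        variants.append(r.flatten())
        variants.append(np.fliplr(r).flatten())
    return min(variants, key=tuple)

def greedy_move(w, b, player):
    """Choose the legal move whose successor position the linear
    evaluation w . canonical(b) rates best for `player`."""
    moves = np.flatnonzero(b == 0)
    scores = []
    for mv in moves:
        b[mv] = player
        scores.append(player * float(np.dot(w, canonical(b))))
        b[mv] = 0
    return moves[int(np.argmax(scores))]

rng = np.random.default_rng(0)

def fitness(w, n_games=100):
    """Hypothetical fitness: negative mean score (win=+1, loss=-1,
    draw=0) against a random opponent; CMA-ES minimizes. The paper's
    actual fitness functions differ."""
    total = 0.0
    for g in range(n_games):
        b = np.zeros(9)
        player, agent = 1, (1 if g % 2 == 0 else -1)  # alternate sides
        while winner(b) == 0 and (b == 0).any():
            if player == agent:
                mv = greedy_move(w, b, player)
            else:
                mv = rng.choice(np.flatnonzero(b == 0))
            b[mv] = player
            player = -player
        total += agent * winner(b)
    return -total / n_games

es = cma.CMAEvolutionStrategy(9 * [0.0], 0.5, {'maxiter': 30})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [fitness(np.asarray(w)) for w in candidates])
print(es.result.xbest)
```

Canonicalizing over the symmetry group shrinks the effective state space by up to a factor of eight, which is one concrete way symmetry-exploiting feature sets can speed up learning relative to the raw board position.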