Evolving an autonomous agent for non-Markovian reinforcement learning

  • Authors:
  • Jae-Yoon Jung; James A. Reggia

  • Affiliations:
  • Queen's University, Kingston, ON, Canada; University of Maryland, College Park, MD, USA

  • Venue:
  • Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation
  • Year:
  • 2009

Abstract

In this paper, we investigate nested evolution, in which each step of an outer evolutionary process involves running a second, inner evolutionary process. We apply this approach to build an evolutionary system for reinforcement learning (RL) problems: genetic programming based on a descriptive encoding evolves the neural network architecture, while an evolution strategy evolves the connection weights of each candidate network. We test this method on a non-Markovian RL problem involving an autonomous foraging agent, finding that the evolved networks significantly outperform a rule-based agent serving as a control. We also demonstrate that nested evolution, partitioning into subpopulations, and crossover operations act synergistically to improve performance in this context.
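To make the nested structure concrete, below is a minimal Python sketch under loose assumptions: architectures are represented as lists of hidden-layer sizes rather than the paper's descriptive encoding, the fitness function is a toy stand-in for episodic reward on the foraging task, and the subpopulation partitioning is omitted. All names and parameters here are illustrative, not the authors' implementation.

```python
# Nested evolution sketch: an outer loop evolves network architectures,
# and evaluating each architecture runs an inner evolution strategy (ES)
# over its connection weights. Hypothetical stand-in, not the paper's code.
import random

def num_weights(arch, n_in=4, n_out=2):
    """Weight count for a feedforward net with the given hidden-layer sizes."""
    sizes = [n_in] + list(arch) + [n_out]
    return sum(a * b for a, b in zip(sizes, sizes[1:]))

def toy_fitness(arch, weights):
    """Toy stand-in for episodic reward on the foraging task (hypothetical)."""
    # Rewards weights near 0.5 and mildly penalizes larger networks.
    return -sum((w - 0.5) ** 2 for w in weights) - 0.01 * len(weights)

def inner_es(arch, generations=20, mu=5, lam=20, sigma=0.3):
    """(mu, lambda) ES over the connection weights; returns best fitness found."""
    n = num_weights(arch)
    pop = [[random.gauss(0, 1) for _ in range(n)] for _ in range(lam)]
    best = float("-inf")
    for _ in range(generations):
        ranked = sorted(pop, key=lambda w: toy_fitness(arch, w), reverse=True)
        best = max(best, toy_fitness(arch, ranked[0]))
        parents = ranked[:mu]
        # Each offspring is a Gaussian perturbation of a random parent.
        pop = [[w + random.gauss(0, sigma) for w in random.choice(parents)]
               for _ in range(lam)]
    return best

def crossover(a, b):
    """One-point crossover on the hidden-layer size lists."""
    cut_a, cut_b = random.randint(0, len(a)), random.randint(0, len(b))
    return (a[:cut_a] + b[cut_b:]) or [random.randint(1, 8)]

def mutate(arch):
    """Perturb layer sizes and occasionally append a new hidden layer."""
    arch = [max(1, h + random.choice([-1, 0, 1])) for h in arch]
    if random.random() < 0.2:
        arch.append(random.randint(1, 8))
    return arch

def outer_evolution(pop_size=10, generations=5):
    """Outer loop: each architecture's fitness is a full inner ES run."""
    pop = [[random.randint(1, 8)] for _ in range(pop_size)]
    for g in range(generations):
        scored = sorted(((inner_es(a), a) for a in pop), reverse=True)
        print(f"gen {g}: best arch {scored[0][1]} fitness {scored[0][0]:.4f}")
        survivors = [a for _, a in scored[:pop_size // 2]]
        pop = survivors + [mutate(crossover(random.choice(survivors),
                                            random.choice(survivors)))
                           for _ in range(pop_size - len(survivors))]
    return max(((inner_es(a), a) for a in pop))[1]

if __name__ == "__main__":
    random.seed(0)
    print("best architecture:", outer_evolution())
```

The key design point is that the outer fitness of an architecture is defined only through a complete run of the inner weight-evolution process, which is what makes the two evolutionary loops nested rather than interleaved.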