Transfer of Experience Between Reinforcement Learning Environments with Progressive Difficulty

  • Authors:
  • Michael G. Madden; Tom Howley

  • Affiliations:
  • Department of Information Technology, National University of Ireland, Galway. E-mail: michael.madden@nuigalway.ie; Department of Information Technology, National University of Ireland, Galway

  • Venue:
  • Artificial Intelligence Review
  • Year:
  • 2004

Abstract

This paper describes an extension to reinforcement learning (RL), in which a standard RL algorithm is augmented with a mechanism for transferring experience gained in one problem to new but related problems. In this approach, named Progressive RL, an agent acquires experience of operating in a simple environment through experimentation, and then engages in a period of introspection, during which it rationalises the experience gained and formulates symbolic knowledge describing how to behave in that simple environment. When subsequently experimenting in a more complex but related environment, it is guided by this knowledge until it gains direct experience. A test domain with 15 maze environments, arranged in order of difficulty, is described. A range of experiments in this domain is presented, demonstrating the benefit of Progressive RL relative to a basic RL approach in which each puzzle is solved from scratch. The experiments also analyse the knowledge formed during introspection, illustrate how domain knowledge may be incorporated, and show that Progressive RL may be used to solve complex puzzles more quickly.
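
The abstract describes the mechanism only at a high level. The sketch below is a minimal illustration of the general idea: tabular Q-learning on one environment, a rule-extraction ("introspection") step, and the resulting rules used as advice when starting on the next, harder environment. The environment interface (`reset`, `actions`, `step`), the rule format, and the way advice is consulted are all assumptions made for this sketch, not the paper's actual algorithm or symbolic representation.

```python
import random
from collections import defaultdict

def q_learning(env, episodes, q=None, advice=None, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning. Optional symbolic advice chooses the action in
    states the agent has not yet built up Q-values for (assumed interface)."""
    q = q if q is not None else defaultdict(float)
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            actions = env.actions(state)
            if advice is not None and all(q[(state, a)] == 0.0 for a in actions):
                action = advice(state, actions)      # follow introspected rules
            elif random.random() < eps:
                action = random.choice(actions)      # explore
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(state, action)
            best_next = 0.0 if done else max(
                q[(next_state, a)] for a in env.actions(next_state))
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

def introspect(q, env):
    """Rationalise experience into simple state -> action rules by taking the
    greedy action in each visited state; stands in for the paper's symbolic step."""
    visited = {s for (s, _a) in q}
    return {s: max(env.actions(s), key=lambda a: q[(s, a)]) for s in visited}

# Hypothetical progressive loop over environments ordered by difficulty:
#
# rules = {}
# for env in mazes_ordered_by_difficulty:
#     advice = lambda s, acts, r=rules: r.get(s, random.choice(acts))
#     q = q_learning(env, episodes=500, advice=advice)
#     rules.update(introspect(q, env))
```

In this toy version, rules are keyed on raw states, so they only help in the next environment where states overlap or are mapped onto a shared abstraction; the paper's contribution lies in forming symbolic knowledge that generalises across the related environments.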