Structural abstraction experiments in reinforcement learning

  • Authors:
  • Robert Fitch, Bernhard Hengst, Dorian Šuc, Greg Calbert, Jason Scholz

  • Affiliations:
  • National ICT Australia, University of NSW, Australia (Fitch, Hengst, Šuc); Defence Science and Technology Organisation, Salisbury, SA, Australia (Calbert, Scholz)

  • Venue:
  • AI'05: Proceedings of the 18th Australian Joint Conference on Advances in Artificial Intelligence
  • Year:
  • 2005

Abstract

A challenge in applying reinforcement learning to large problems is managing the explosive growth in storage and time complexity. This is especially problematic in multi-agent systems, where the state space grows exponentially with the number of agents. Function approximation based on simple supervised learning is unlikely to scale to complex domains on its own, but structural abstraction that exploits system properties and problem representations shows more promise. In this paper, we investigate several classes of known abstractions: 1) symmetry, 2) decomposition into multiple agents, 3) hierarchical decomposition, and 4) sequential execution. We empirically compare memory requirements, learning time, and solution quality on two problem variations. Our results indicate that the most effective solutions come from combinations of structural abstractions, and they encourage the development of methods for discovering such abstractions automatically in novel problem formulations.
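Of the four abstraction classes above, symmetry is the simplest to illustrate concretely: states related by a symmetry of the problem can share a single value-table entry, roughly halving memory. The sketch below is an illustrative toy only (a mirror-symmetric corridor world with tabular Q-learning), not the paper's experimental setup; the domain, function names, and hyperparameters are all assumptions made for the example.

```python
import random

N = 7                      # corridor positions 0..6, goal at the centre
GOAL = N // 2
ACTIONS = [-1, +1]         # step left / step right

def canonical(s, a):
    """Map (state, action) to its mirror-symmetric representative.

    The corridor is symmetric about the goal, so position s and its
    reflection N-1-s (with the action mirrored) can share one Q-entry,
    cutting the table roughly in half.
    """
    if s > GOAL:
        return N - 1 - s, -a
    return s, a

Q = {}                     # Q-table keyed by canonical (state, action) pairs

def q(s, a):
    return Q.get(canonical(s, a), 0.0)

def update(s, a, r, s2, alpha=0.5, gamma=0.9):
    # Standard one-step Q-learning update, applied to the canonical key.
    best_next = max(q(s2, a2) for a2 in ACTIONS)
    key = canonical(s, a)
    Q[key] = Q.get(key, 0.0) + alpha * (r + gamma * best_next - Q.get(key, 0.0))

random.seed(0)
for _ in range(500):               # episodes
    s = random.randrange(N)
    for _ in range(20):            # steps per episode
        if s == GOAL:
            break
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < 0.2 else \
            max(ACTIONS, key=lambda a2: q(s, a2))
        s2 = min(N - 1, max(0, s + a))
        r = 1.0 if s2 == GOAL else -0.1
        update(s, a, r, s2)
        s = s2

# Greedy policy recovered from the abstracted table: positions left of the
# goal should step right, positions right of it should step left.
greedy = {s: max(ACTIONS, key=lambda a: q(s, a)) for s in range(N) if s != GOAL}
print(greedy)
```

Because every update is routed through `canonical`, the table only ever stores entries for the left half of the corridor, yet the recovered policy covers both halves; the same idea scales to richer symmetry groups at the cost of computing the canonicalization.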