Using ILP to Improve Planning in Hierarchical Reinforcement Learning

  • Authors:
  • Mark Reid; Malcolm R. K. Ryan


  • Venue:
  • ILP '00 Proceedings of the 10th International Conference on Inductive Logic Programming
  • Year:
  • 2000

Abstract

Hierarchical reinforcement learning has been proposed as a solution to the problem of scaling up reinforcement learning. The RLTOPs Hierarchical Reinforcement Learning System is an implementation of this proposal which structures an agent's sensors and actions into various levels of representation and control. Disparities between these levels of representation mean that actions can be misused by the system's planning algorithm. This paper reports on how ILP was used to bridge these representation gaps and shows empirically how this improved the system's performance. Also discussed are some of the problems encountered when using an ILP system in what is inherently a noisy and incremental domain.