Learning relational options for inductive transfer in relational reinforcement learning

  • Authors:
  • Tom Croonenborghs, Kurt Driessens, Maurice Bruynooghe

  • Affiliations:
  • K.U.Leuven, Dept. of Computer Science, Leuven (all authors)

  • Venue:
  • ILP'07: Proceedings of the 17th International Conference on Inductive Logic Programming
  • Year:
  • 2007


Abstract

In reinforcement learning problems, an agent has the task of learning a good or optimal strategy from interaction with its environment. At the start of the learning task, the agent usually has very little information, so when faced with complex problems that have a large state space, learning a good strategy may be infeasible or too slow to work in practice. One way to overcome this problem is to provide guidance, supplying the agent with traces of "reasonable policies". However, in many cases it is hard for the user to supply such a policy. In this paper, we investigate the use of transfer learning in Relational Reinforcement Learning. The goal of transfer learning is to accelerate learning on a target task after training on a different, but related, source task. More specifically, we introduce an extension of the options framework to the relational setting and show how one can learn skills that can be transferred across similar, but different, domains. We present experiments showing the possible benefits of using relational options for transfer learning.
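
For context on the options framework the abstract builds on: an option (Sutton, Precup & Singh, 1999) is a triple (I, pi, beta) of an initiation set, an intra-option policy, and a termination condition. The Python sketch below is purely illustrative and not taken from the paper; the RelationalOption class, the ground-fact string encoding, and the make_clear_option blocks-world skill are all hypothetical names introduced here. It is meant only to show why defining a skill over relational patterns rather than enumerated states lets the same option carry over to similar but different domains.

    import random
    from dataclasses import dataclass
    from typing import Callable, FrozenSet

    State = FrozenSet[str]   # a relational state: a set of ground facts such as "on(a,b)"
    Action = str

    @dataclass
    class RelationalOption:
        initiation: Callable[[State], bool]    # I: states where the option may be invoked
        policy: Callable[[State], Action]      # pi: action chosen while the option runs
        termination: Callable[[State], float]  # beta: probability of terminating in a state

    def run_option(opt: RelationalOption, state: State,
                   step: Callable[[State, Action], State],
                   max_steps: int = 100) -> State:
        """Execute an option until its termination condition fires."""
        assert opt.initiation(state), "option invoked outside its initiation set"
        for _ in range(max_steps):
            if random.random() < opt.termination(state):
                break
            state = step(state, opt.policy(state))
        return state

    def make_clear_option(x: str) -> RelationalOption:
        """Hypothetical blocks-world skill: unstack until block x is clear.

        Defined over relational patterns rather than enumerated states, so
        the same option applies in worlds with more (or differently named)
        blocks.
        """
        def covered(s: State) -> bool:
            return any(f.startswith("on(") and f.endswith(f",{x})") for f in s)

        def unstack(s: State) -> Action:
            # Move the block that sits directly on x to the floor.
            # (Simplified: assumes that block is itself clear.)
            fact = next(f for f in s if f.startswith("on(") and f.endswith(f",{x})"))
            blocker = fact[3:].split(",")[0]
            return f"move({blocker},floor)"

        return RelationalOption(
            initiation=covered,
            policy=unstack,
            termination=lambda s: 0.0 if covered(s) else 1.0,
        )

Because make_clear_option("b") tests the pattern on(_, b) instead of matching whole states, the resulting skill can be invoked unchanged in a target task with a different number of blocks, which is the kind of cross-domain reuse the paper's relational options are designed for.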