Transfer Learning in Reinforcement Learning Problems Through Partial Policy Recycling

  • Authors:
  • Jan Ramon, Kurt Driessens, Tom Croonenborghs

  • Affiliations:
  • K.U. Leuven, Dept. of Computer Science, Celestijnenlaan 200A, B-3001 Leuven, Belgium (all authors)

  • Venue:
  • ECML '07: Proceedings of the 18th European Conference on Machine Learning
  • Year:
  • 2007

Abstract

We investigate the relation between transfer learning in reinforcement learning with function approximation and supervised learning with concept drift. We present a new incremental relational regression tree algorithm that is capable of dealing with concept drift through tree restructuring and show that it enables a Q-learner to transfer knowledge from one task to another by recycling those parts of the generalized Q-function that still hold interesting information for the new task. We illustrate the performance of the algorithm in several experiments.
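The abstract only outlines the approach. As a rough illustration of the underlying idea, here is a minimal Python sketch: a regression tree whose leaves monitor their own prediction error and discard their local statistics when concept drift makes them obsolete, so that unaffected parts of the old Q-function are recycled for the new task. This is not the authors' actual relational tree algorithm (which operates on relational representations); all names and thresholds (`Leaf`, `Node`, `DRIFT_THRESHOLD`, `MIN_EXAMPLES`) are illustrative assumptions.

```python
import random

# Assumed hyperparameters for this toy sketch, not values from the paper.
DRIFT_THRESHOLD = 5.0   # mean squared error that triggers restructuring
MIN_EXAMPLES = 10       # minimum samples before a leaf may reset itself


class Leaf:
    """Stores a running mean Q-value and monitors its own squared error."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.sq_error = 0.0

    def predict(self, state):
        return self.mean

    def update(self, state, q_target):
        # Track drift via the error of the *old* prediction on new data.
        self.sq_error += (q_target - self.mean) ** 2
        self.n += 1
        self.mean += (q_target - self.mean) / self.n
        # If this part of the Q-function no longer fits the new task,
        # forget it; otherwise it is recycled as-is.
        if self.n >= MIN_EXAMPLES and self.sq_error / self.n > DRIFT_THRESHOLD:
            self.__init__()  # reset the statistics for this state region
        return self


class Node:
    """Internal test node; routes examples and lets subtrees self-repair."""

    def __init__(self, test, left, right):
        self.test, self.left, self.right = test, left, right

    def predict(self, state):
        branch = self.left if self.test(state) else self.right
        return branch.predict(state)

    def update(self, state, q_target):
        if self.test(state):
            self.left = self.left.update(state, q_target)
        else:
            self.right = self.right.update(state, q_target)
        return self


# Toy usage: a Q-function over one state feature, transferred to a new
# task where only states with x >= 0 changed their values.
tree = Node(test=lambda s: s[0] < 0, left=Leaf(), right=Leaf())
for _ in range(50):                       # old task
    x = random.uniform(-1, 1)
    tree = tree.update((x,), -1.0 if x < 0 else 1.0)
for _ in range(50):                       # new task: right region drifted
    x = random.uniform(-1, 1)
    tree = tree.update((x,), -1.0 if x < 0 else 5.0)
print(tree.predict((-0.5,)), tree.predict((0.5,)))
```

In this toy run, the left leaf's value survives the task change unchanged while the right leaf detects the drift, resets, and is relearned, mirroring the paper's notion of recycling only those parts of the generalized Q-function that still hold for the new task.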