Transferring task models in Reinforcement Learning agents

  • Authors:
  • Anestis Fachantidis, Ioannis Partalas, Grigorios Tsoumakas, Ioannis Vlahavas

  • Affiliations:
  • Department of Informatics, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece; Laboratoire LIG, Université Joseph Fourier, 38041 Grenoble Cedex 9, France

  • Venue:
  • Neurocomputing
  • Year:
  • 2013


Abstract

The main objective of transfer learning is to reuse knowledge acquired in a previously learned task in order to enhance the learning procedure in a new and more complex task. Transfer learning is thus a suitable approach for speeding up learning in Reinforcement Learning tasks. This work proposes a novel method for transferring models to Reinforcement Learning agents: the models of the transition and reward functions of a source task are transferred to a relevant but different target task. The learning algorithm of the target task's agent takes a hybrid approach, implementing both model-free and model-based learning, in order to fully exploit the presence of a source task model. Moreover, a novel method is proposed for transferring models of potential-based reward shaping functions. The empirical evaluation of the proposed approaches demonstrates significant performance improvements in the 3D Mountain Car and Server Job Scheduling tasks, achieved by successfully using the models generated from their corresponding source tasks.
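The abstract's mention of potential-based reward shaping can be illustrated with a minimal sketch. The shaping term has the standard form F(s, s') = γΦ(s') − Φ(s) (Ng et al.), which preserves the optimal policy of the original task; the potential function `phi` below is a hypothetical stand-in for a model transferred from a source task, not the authors' actual function.

```python
GAMMA = 0.95  # discount factor (illustrative value)

def phi(state):
    """Hypothetical potential: negative distance to a goal state.

    In the paper's setting this role is played by a shaping model
    transferred from the source task.
    """
    goal = 10
    return -abs(goal - state)

def shaped_reward(reward, state, next_state, gamma=GAMMA):
    """Augment the environment reward with the potential-based term
    F(s, s') = gamma * phi(s') - phi(s), which leaves the optimal
    policy of the underlying task unchanged."""
    return reward + gamma * phi(next_state) - phi(state)

# Moving toward the goal (state 5 -> state 6) yields a positive bonus,
# steering early exploration without altering the task's optima.
bonus = shaped_reward(0.0, 5, 6)
```

A step toward the goal raises the potential, so the agent receives an immediate positive signal even when the environment reward is sparse.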