Transfer learning via multiple inter-task mappings

  • Authors:
  • Anestis Fachantidis, Ioannis Partalas, Matthew E. Taylor, and Ioannis Vlahavas

  • Affiliations:
  • Department of Informatics, Aristotle University of Thessaloniki, Greece (Fachantidis, Partalas, Vlahavas); Department of Computer Science, Lafayette College (Taylor)

  • Venue:
  • EWRL'11 Proceedings of the 9th European conference on Recent Advances in Reinforcement Learning
  • Year:
  • 2011


Abstract

In this paper we investigate the use of multiple inter-task mappings for transfer learning in reinforcement learning tasks. We propose two transfer learning algorithms, one for model-learning and one for model-free reinforcement learning, that are able to exploit multiple inter-task mappings. Both algorithms incorporate mechanisms for selecting the appropriate mappings, helping to avoid the phenomenon of negative transfer. The proposed algorithms are evaluated in the Mountain Car and Keepaway domains. Experimental results show that using multiple inter-task mappings can significantly boost the performance of transfer learning methodologies relative to using a single mapping or learning without transfer.
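To make the idea of mapping selection concrete, the following is a minimal, purely illustrative sketch (not the paper's actual algorithms): given several candidate inter-task mappings from target states to source states, it scores each mapping by the one-step Bellman error of the transferred Q-values on a small batch of target-task transitions, and keeps the mapping with the lowest error. All names, the toy tasks, and the scoring rule are assumptions for illustration.

```python
# Illustrative sketch: choosing among multiple inter-task mappings by how
# well each mapping's transferred Q-values explain observed target-task
# transitions. The tasks and mappings here are toy stand-ins, not the
# Mountain Car / Keepaway setups used in the paper.

# Source-task Q-values over a tiny discrete state-action space (contrived).
source_q = {(s, a): float(a == s % 2) for s in range(4) for a in range(2)}

# Candidate inter-task mappings: target state -> source state.
mappings = {
    "identity": lambda s: s % 4,
    "shifted": lambda s: (s + 1) % 4,
}

def transferred_q(mapping, s, a):
    """Q-value for a target (s, a) pair obtained via a candidate mapping."""
    return source_q[(mapping(s), a)]

def bellman_error(mapping, transitions, gamma=0.9):
    """Mean squared one-step Bellman error of the transferred Q-values."""
    err = 0.0
    for s, a, r, s_next in transitions:
        target = r + gamma * max(transferred_q(mapping, s_next, b)
                                 for b in range(2))
        err += (transferred_q(mapping, s, a) - target) ** 2
    return err / len(transitions)

def select_mapping(transitions):
    """Keep the mapping with the lowest error, to limit negative transfer."""
    return min(mappings, key=lambda name: bellman_error(mappings[name],
                                                        transitions))

# Target-task transitions (s, a, r, s') consistent with the identity mapping.
transitions = [(0, 0, 1.0, 0), (1, 1, 1.0, 1)]
best = select_mapping(transitions)
```

With these contrived transitions, the identity mapping incurs a lower Bellman error than the shifted one and is selected; a poorly matched mapping is thereby filtered out rather than blindly transferred, which is the intuition behind avoiding negative transfer.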