Transferring experience in reinforcement learning through task decomposition

  • Authors:
  • Ioannis Partalas; Grigorios Tsoumakas; Konstantinos Tzevanidis; Ioannis Vlahavas

  • Affiliations:
  • Aristotle University of Thessaloniki, Greece (all authors)

  • Venue:
  • Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 2
  • Year:
  • 2009


Abstract

Transfer learning refers to the process of conveying experience from a simple task to a more complex (and related) task in order to reduce the time required to learn the latter. Typically, in a transfer learning procedure the agent learns a behavior in a source task and then uses the acquired knowledge to speed up learning in a target task. Reinforcement learning algorithms are time-consuming when learning from scratch, especially in complex domains, and transfer learning offers a suitable way to speed up training. In this work we propose a method that decomposes the target task into several instances of the source task and uses them to extract an advised action for the target task. We evaluate the efficacy of the proposed approach in the robotic soccer Keepaway domain. The results demonstrate that the proposed method helps to reduce the training time of the target task.
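
To make the decomposition idea concrete, the sketch below is a minimal illustration, not the paper's implementation: it assumes a Keepaway-style setting where a larger target-task state (e.g., 4 keepers vs. 3 takers) is split into all source-task-sized sub-states (3 vs. 2), a policy learned on the source task advises an action for each sub-state, and the advice is aggregated by majority vote. The function names (`decompose_state`, `advised_action`), the toy policy, and the voting rule are all assumptions made for illustration.

```python
from collections import Counter
from itertools import combinations

def decompose_state(keepers, takers):
    """Yield every source-task-sized (3 keepers vs. 2 takers) sub-state
    of a larger target-task state. Assumed decomposition for illustration."""
    for k_subset in combinations(keepers, 3):
        for t_subset in combinations(takers, 2):
            yield list(k_subset), list(t_subset)

def advised_action(state, source_policy):
    """Query the source-task policy on each sub-state and combine the
    advice into a single suggested action via majority vote (assumed rule)."""
    keepers, takers = state
    votes = Counter()
    for sub_keepers, sub_takers in decompose_state(keepers, takers):
        votes[source_policy((sub_keepers, sub_takers))] += 1
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    # Toy placeholder policy: pass if any taker is near the ball holder,
    # otherwise hold. Positions are simple (x, y) tuples.
    def toy_policy(sub_state):
        sub_keepers, sub_takers = sub_state
        ball_holder = sub_keepers[0]
        nearest = min(abs(ball_holder[0] - t[0]) + abs(ball_holder[1] - t[1])
                      for t in sub_takers)
        return "pass" if nearest <= 3 else "hold"

    keepers = [(0, 0), (5, 5), (10, 0), (5, -5)]   # 4v3 target-task state
    takers = [(1, 1), (6, 6), (9, 1)]
    print(advised_action((keepers, takers), toy_policy))
```

The advised action could then be injected into the target-task learner, for example by occasionally executing it instead of the learner's own greedy choice; how exactly the advice is used during training is a design choice of the transfer method and is evaluated in the paper on the Keepaway domain.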