Reinforcement learning transfer using a sparse coded inter-task mapping

  • Authors:
  • Haitham Bou Ammar; Matthew E. Taylor; Karl Tuyls; Gerhard Weiss

  • Affiliations:
  • Department of Knowledge Engineering, Maastricht University, The Netherlands; Department of Computer Science, Lafayette College; Department of Knowledge Engineering, Maastricht University, The Netherlands; Department of Knowledge Engineering, Maastricht University, The Netherlands

  • Venue:
  • EUMAS'11 Proceedings of the 9th European conference on Multi-Agent Systems
  • Year:
  • 2011


Abstract

Reinforcement learning agents can successfully learn in a variety of difficult tasks. A fundamental problem is that they may learn slowly in complex environments, inspiring the development of speedup methods such as transfer learning. Transfer improves learning by reusing learned behaviors in similar tasks, usually via an inter-task mapping, which defines how a pair of tasks is related. This paper proposes a novel transfer learning technique that autonomously constructs an inter-task mapping by using a novel combination of sparse coding, sparse projection learning, and sparse pseudo-input Gaussian processes. Experiments show successful transfer of information between two very different domains: the mountain car and the pole swing-up task. This paper empirically shows that the learned inter-task mapping can be used to successfully (1) improve the performance of a learned policy on a fixed number of samples, (2) reduce the time needed by the algorithms to converge to a policy on a fixed number of samples, and (3) converge faster to a near-optimal policy given a large number of samples.
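As a rough illustration of the core idea, and not the authors' actual algorithm, the sketch below shows how a sparse code over a shared dictionary could relate the state spaces of two tasks: a source state (e.g., a 2-D mountain-car state) is encoded as a sparse combination of source-dictionary atoms, and the same code is decoded through a target dictionary to produce a target-task state (e.g., a 4-D pole swing-up state). The dictionaries here are random placeholders, the dimensions are assumptions, and the encoder is plain greedy matching pursuit rather than the paper's sparse projection learning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dictionaries whose columns are atoms over a shared latent code.
# Assumed dimensions: source states are 2-D, target states are 4-D.
k = 6                          # number of shared dictionary atoms (assumption)
D_src = rng.normal(size=(2, k))
D_tgt = rng.normal(size=(4, k))

def sparse_code(x, D, n_nonzero=2):
    """Greedy matching pursuit: approximate x with a few atoms of D."""
    code = np.zeros(D.shape[1])
    residual = x.astype(float).copy()
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        j = np.argmax(np.abs(D.T @ residual))
        atom = D[:, j]
        w = atom @ residual / (atom @ atom)
        code[j] += w
        residual -= w * atom
    return code

def map_state(x_src):
    """Map a source-task state to the target task via the shared sparse code."""
    return D_tgt @ sparse_code(x_src, D_src)

x = np.array([-0.5, 0.03])      # a mountain-car state (position, velocity)
y = map_state(x)                # a 4-D pole swing-up state estimate
print(y.shape)
```

With learned (rather than random) dictionaries, such a mapping lets samples gathered in the source task be translated into plausible target-task samples, which is what makes the transferred experience usable by the target learner.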