Reinforcement learning transfer via common subspaces

  • Authors:
  • Haitham Bou Ammar; Matthew E. Taylor

  • Affiliations:
  • Department of Knowledge Engineering, Maastricht University, Netherlands; Department of Computer Science, Lafayette College

  • Venue:
  • ALA'11 Proceedings of the 11th international conference on Adaptive and Learning Agents
  • Year:
  • 2011

Abstract

Agents in reinforcement learning tasks may learn slowly in large or complex tasks -- transfer learning is one technique to speed up learning by providing an informative prior. How best to enable transfer between tasks with different state representations and/or actions is currently an open question. This paper introduces the concept of a common task subspace, which is used to autonomously learn how two tasks are related. Experiments in two different nonlinear domains empirically show that a learned inter-state mapping can successfully be used by fitted value iteration to (1) improve the performance of a policy learned with a fixed number of samples, and (2) reduce the time required to converge to a (near-)optimal policy with unlimited samples.
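The core idea of a common task subspace can be sketched numerically. In this minimal, hypothetical illustration (not the authors' actual algorithm), each task is assumed to come with a linear projection of its own state space into a shared low-dimensional subspace; an inter-state mapping from source to target states is then obtained by projecting source states into the subspace and solving a least-squares problem for target states with the same subspace image. All matrices and dimensions below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tasks: source states are 4-D, target states are 2-D.
# Both tasks project into a shared 2-D common task subspace.
A_src = rng.normal(size=(2, 4))               # source-state -> subspace (assumed given)
A_tgt = np.array([[1.0, 0.5], [-0.3, 2.0]])   # target-state -> subspace (assumed given)

# Sample source-task states and project them into the common subspace.
src_states = rng.normal(size=(100, 4))
subspace_pts = src_states @ A_src.T

# Inter-state mapping: for each subspace point z, find the target state t
# with A_tgt @ t = z, via least squares (exact here since A_tgt is square).
tgt_states, *_ = np.linalg.lstsq(A_tgt, subspace_pts.T, rcond=None)
tgt_states = tgt_states.T

# Sanity check: mapped target states reproduce the subspace points,
# i.e. source and mapped target states agree in the common subspace.
err = np.abs(tgt_states @ A_tgt.T - subspace_pts).max()
print(err)
```

The mapped target states can then serve as an informative prior (e.g. to seed samples for fitted value iteration in the target task), which is the role the learned inter-state mapping plays in the paper's experiments.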