Transferring Instances for Model-Based Reinforcement Learning

  • Authors:
  • Matthew E. Taylor, Nicholas K. Jong, Peter Stone

  • Affiliations:
  • Department of Computer Sciences, The University of Texas at Austin (all authors)

  • Venue:
  • ECML PKDD '08: Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases, Part II
  • Year:
  • 2008

Abstract

Reinforcement learning agents typically require a significant amount of data before performing well on complex tasks. Transfer learning methods have made progress in reducing sample complexity, but they have primarily been applied to model-free learning methods rather than to the more data-efficient model-based methods. This paper introduces TIMBREL, a novel method capable of transferring information effectively into a model-based reinforcement learning algorithm. We demonstrate that TIMBREL can significantly improve the sample efficiency and asymptotic performance of a model-based algorithm when learning in a continuous state space. Additionally, we conduct experiments to test the limits of TIMBREL's effectiveness.
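The core idea of instance transfer is to take recorded transitions from a source task, map them through inter-task mappings into the target task's state and action spaces, and use them to seed the target agent's model learner. The sketch below illustrates this; it is not the paper's implementation, and all names in it (`Instance`, `transfer_instances`, `chi_x`, `chi_a`, and the example mappings) are illustrative assumptions.

```python
"""Minimal sketch of instance transfer for model-based RL, in the spirit of
TIMBREL. Names and mappings are illustrative assumptions, not the paper's API."""
from dataclasses import dataclass
from typing import Callable, List, Sequence


@dataclass
class Instance:
    """One observed transition (s, a, r, s') from an agent's experience."""
    state: Sequence[float]
    action: int
    reward: float
    next_state: Sequence[float]


def transfer_instances(
    source_instances: List[Instance],
    chi_x: Callable[[Sequence[float]], Sequence[float]],  # inter-task state mapping
    chi_a: Callable[[int], int],                          # inter-task action mapping
) -> List[Instance]:
    """Map source-task transitions into the target task's state-action space.

    The mapped instances can seed an instance-based model learner, so the
    target-task agent starts with an approximate model of the dynamics and
    reward instead of an empty one.
    """
    return [
        Instance(
            state=chi_x(inst.state),
            action=chi_a(inst.action),
            reward=inst.reward,
            next_state=chi_x(inst.next_state),
        )
        for inst in source_instances
    ]


if __name__ == "__main__":
    # Hypothetical example: a 2D source task mapped into a 4D target task by
    # duplicating coordinates, with actions carried over unchanged.
    source = [Instance(state=[0.1, 0.2], action=1, reward=-1.0,
                       next_state=[0.15, 0.25])]
    mapped = transfer_instances(
        source,
        chi_x=lambda s: [s[0], s[1], s[0], s[1]],
        chi_a=lambda a: a,
    )
    print(mapped[0])
```

In practice the inter-task mappings are the critical design choice: a poor mapping can produce instances that mislead the model learner, which is one reason for testing the limits of a transfer method's effectiveness, as the abstract notes.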