Experiments with Adaptive Transfer Rate in Reinforcement Learning

  • Authors:
  • Yann Chevaleyre;Aydano Machado Pamponet;Jean-Daniel Zucker

  • Affiliations:
  • Université Paris-Dauphine;Université Paris 6;IRD UR Géodes, Centre IRD de l'Île de France, Bondy, France

  • Venue:
  • Knowledge Acquisition: Approaches, Algorithms and Applications
  • Year:
  • 2009

Abstract

Transfer algorithms allow knowledge previously learned on related tasks to be reused to speed up learning of the current task. Recently, many complex reinforcement learning problems have been successfully solved by efficient transfer learners. However, most of these algorithms suffer from a severe flaw: they are implicitly tuned to transfer knowledge between tasks having a given degree of similarity. In other words, if the previous task is very dissimilar (resp. nearly identical) to the current task, then the transfer process might slow down learning (resp. might fall far short of the optimal speed-up). In this paper, we address this specific issue by explicitly optimizing the transfer rate between tasks and answer the question: "Can the transfer rate be accurately optimized, and at what cost?". We show that this optimization problem is related to the continuum bandit problem. We then propose a generic adaptive transfer method (AdaTran), which extends several existing transfer learning algorithms so that they optimize the transfer rate. Finally, we run several experiments validating our approach.
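
The abstract relates transfer-rate optimization to the continuum bandit problem. As a rough illustration only (not the authors' AdaTran algorithm), the sketch below discretizes the space of transfer rates and uses a simple epsilon-greedy bandit to pick the rate whose Q-table mixing yields the best early returns for a Q-learning agent; the toy chain environment, the convex mixing scheme, and all names are assumptions made for this example.

```python
# Hypothetical sketch: choosing a transfer rate with a discretized bandit,
# assuming the learner's Q-table is initialized as a mix of a source task's
# Q-table. This is an illustration, not the paper's AdaTran method.
import numpy as np

rng = np.random.default_rng(0)

# Toy 10-state chain MDP: action 0 moves left, action 1 moves right,
# reward 1 on reaching the final state.
N_STATES, N_ACTIONS, GOAL = 10, 2, 9

def step(s, a):
    s_next = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s_next, float(s_next == GOAL), s_next == GOAL

def run_episode(Q, eps=0.1, alpha=0.5, gamma=0.95, max_steps=50):
    """One Q-learning episode starting from state 0; returns the undiscounted return."""
    s, ret = 0, 0.0
    for _ in range(max_steps):
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) * (not done) - Q[s, a])
        ret += r
        s = s_next
        if done:
            break
    return ret

# Source knowledge: an (imperfect) Q-table assumed to come from a related task.
Q_source = rng.normal(0.5, 0.1, (N_STATES, N_ACTIONS))
Q_source[:, 1] += 0.3  # the source task also favoured moving right

# Candidate transfer rates: a discretization of the continuum bandit's arm space.
rates = np.linspace(0.0, 1.0, 6)
value_est = np.zeros_like(rates)
counts = np.zeros_like(rates)

for trial in range(60):
    # Epsilon-greedy bandit over transfer rates.
    k = rng.integers(len(rates)) if rng.random() < 0.2 else int(np.argmax(value_est))
    beta = rates[k]
    # Transfer: start the learner's Q-table at beta * Q_source (beta = 0 means no transfer).
    Q = beta * Q_source.copy()
    # Bandit payoff: average return over a few early episodes, i.e. how much
    # the chosen transfer rate speeds up initial learning.
    payoff = np.mean([run_episode(Q) for _ in range(5)])
    counts[k] += 1
    value_est[k] += (payoff - value_est[k]) / counts[k]

print("estimated payoff per transfer rate:",
      dict(zip(rates.round(2), value_est.round(2))))
```

In this toy setting, a rate of 0 ignores the source task entirely and a rate of 1 copies it verbatim; the bandit's running averages indicate which intermediate rate best trades off reuse against the mismatch between tasks, which is the trade-off the paper formalizes.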