In this paper we introduce a budgeted knowledge-transfer algorithm for non-homogeneous reinforcement learning agents, in which the source and target agents are identical except in their state representations. The algorithm uses the functional space (the Q-value space) as the transfer medium: the target agent's functional points (Q-values) are estimated in an automatically selected lower-dimensional subspace in order to accelerate knowledge transfer. During the transfer period, the target agent searches that subspace with an exploration policy and selects actions accordingly, which helps it obtain an accurate estimate of its Q-table. We show both analytically and empirically that this method reduces the learning budget required by the target agent.
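The core idea of estimating Q-values in a lower-dimensional subspace can be sketched as follows. This is a minimal illustration, not the paper's algorithm: SVD/PCA is an assumed stand-in for the automatic subspace selection, and the names `low_rank_q_transfer`, `q_source`, and `k` are illustrative.

```python
import numpy as np

def low_rank_q_transfer(q_source, k):
    """Project a Q-table (states x actions) onto its top-k principal
    directions in Q-value space and return the low-rank reconstruction.
    A hypothetical stand-in for estimating the target agent's functional
    points in an automatically selected lower-dimensional subspace."""
    mean = q_source.mean(axis=0)
    centered = q_source - mean
    # SVD yields the principal directions of the functional (Q-value) space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                 # top-k subspace basis vectors
    coords = centered @ basis.T    # coordinates of each state in the subspace
    return coords @ basis + mean   # low-rank Q estimate to initialize the target

# Toy example: 20 states, 4 actions.
rng = np.random.default_rng(0)
q_src = rng.standard_normal((20, 4))
q_hat = low_rank_q_transfer(q_src, k=2)
assert q_hat.shape == q_src.shape
```

In this sketch, the target agent would act (e.g. epsilon-greedily) with respect to `q_hat` during the transfer period, so that the budget is spent refining a low-rank estimate rather than learning the full Q-table from scratch.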