Temporal difference (TD) learning (Sutton and Barto, 1998) has become a popular reinforcement learning technique in recent years. TD methods, which rely on function approximators to generalize learning to novel situations, have had some experimental successes and have been shown to exhibit some desirable properties in theory, but the most basic algorithms are often slow in practice. This empirical result has motivated the development of many methods that speed up reinforcement learning by modifying a task for the learner or helping the learner better generalize to novel situations. This article focuses on generalizing across tasks, thereby speeding up learning, via a novel form of transfer using handcoded task relationships. We compare learning on a complex task with three function approximators, a cerebellar model arithmetic computer (CMAC), an artificial neural network (ANN), and a radial basis function (RBF) network, and empirically demonstrate that directly transferring the action-value function can lead to a dramatic speedup in learning with all three. Using transfer via inter-task mapping (TVITM), agents are able to learn one task and then markedly reduce the time it takes to learn a more complex task. Our algorithms are fully implemented and tested in the RoboCup soccer Keepaway domain. This article contains and extends material published in two conference papers (Taylor and Stone, 2005; Taylor et al., 2005).
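The core idea described above — initializing a target task's action-value function from a learned source-task policy via handcoded inter-task mappings — can be sketched as follows. This is a minimal tabular illustration, not the paper's actual implementation (which transfers function-approximator weights in Keepaway); the names `chi_S` and `chi_A` for the state and action mappings are hypothetical.

```python
def transfer_q_values(source_q, target_states, target_actions, chi_S, chi_A):
    """Initialize a target-task Q-table from a source-task Q-table.

    source_q: dict mapping (source_state, source_action) -> learned value
    chi_S:    maps each target state to an analogous source state
    chi_A:    maps each target action to an analogous source action
    """
    target_q = {}
    for s in target_states:
        for a in target_actions:
            # Each target (s, a) pair inherits the value of its
            # source-task analogue; unseen pairs default to 0.0.
            target_q[(s, a)] = source_q.get((chi_S(s), chi_A(a)), 0.0)
    return target_q

# Toy example: the source task has 2 states; the target task has 3.
# The novel target state 2 is mapped onto the analogous source state 1.
source_q = {(0, 0): 1.0, (0, 1): 0.5, (1, 0): -0.2, (1, 1): 0.8}
chi_S = lambda s: min(s, 1)
chi_A = lambda a: min(a, 1)
q0 = transfer_q_values(source_q, range(3), range(2), chi_S, chi_A)
print(q0[(2, 0)])  # -0.2, transferred from source state 1
```

TD learning in the target task then proceeds from `q0` instead of from an uninformed initialization, which is what produces the reported speedup.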