In this paper, we investigate the use of parallelization in reinforcement learning (RL), with the goal of learning optimal policies for single-agent RL problems more quickly by using parallel hardware. Our approach is based on agents using the SARSA(λ) algorithm, with value functions represented using linear function approximators. In our proposed method, each agent learns independently in a separate simulation of the single-agent problem. The agents periodically exchange information extracted from the weights of their approximators, accelerating convergence towards the optimal policy. We develop three increasingly efficient versions of this approach to parallel RL, and present empirical results for an implementation of the methods on a Beowulf cluster.
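The scheme described above can be illustrated with a minimal sketch: several agents each run SARSA(λ) with a linear function approximator in their own copy of a single-agent problem, and periodically merge what they have learned through their weight vectors. Everything concrete below is an assumption for illustration, not the paper's implementation: the toy chain environment, the one-hot features, the hyperparameters, and in particular the merge rule (plain averaging of weights stands in for the paper's exchange of information extracted from the approximator weights, which ran on a Beowulf cluster rather than in one process).

```python
import numpy as np

N_ACTIONS = 2  # 0 = left, 1 = right

class ChainEnv:
    """Hypothetical toy task: a 5-state chain; reaching the rightmost
    state yields reward 1 and ends the episode."""
    def __init__(self, n=5):
        self.n = n
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = self.s + 1 if a == 1 else max(0, self.s - 1)
        done = self.s == self.n - 1
        return self.s, (1.0 if done else 0.0), done

def phi(s, a, n):
    """One-hot (state, action) features, so the linear FA is exact here."""
    x = np.zeros(n * N_ACTIONS)
    x[s * N_ACTIONS + a] = 1.0
    return x

def eps_greedy(w, s, n, eps, rng):
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax([w @ phi(s, a, n) for a in range(N_ACTIONS)]))

def run_episode(env, w, rng, alpha=0.1, gamma=0.95, lam=0.8, eps=0.1,
                max_steps=200):
    """One SARSA(lambda) episode with linear FA and accumulating traces;
    updates w in place."""
    z = np.zeros_like(w)           # eligibility trace over weights
    s = env.reset()
    a = eps_greedy(w, s, env.n, eps, rng)
    for _ in range(max_steps):
        s2, r, done = env.step(a)
        x = phi(s, a, env.n)
        z = gamma * lam * z + x
        delta = r - w @ x          # TD error: r + gamma*Q(s',a') - Q(s,a)
        if not done:
            a2 = eps_greedy(w, s2, env.n, eps, rng)
            delta += gamma * (w @ phi(s2, a2, env.n))
        w += alpha * delta * z
        if done:
            return
        s, a = s2, a2

def parallel_sarsa(n_agents=4, n_rounds=20, episodes_per_round=5, seed=0):
    """Each agent learns independently in its own environment copy; every
    round the agents exchange information by averaging their weight
    vectors (an illustrative stand-in for the paper's exchange step).
    Weights start optimistically at 1.0 to drive exploration."""
    rng = np.random.default_rng(seed)
    envs = [ChainEnv() for _ in range(n_agents)]
    dim = envs[0].n * N_ACTIONS
    ws = [np.ones(dim) for _ in range(n_agents)]
    for _ in range(n_rounds):
        for env, w in zip(envs, ws):
            for _ in range(episodes_per_round):
                run_episode(env, w, rng)
        merged = np.mean(ws, axis=0)           # periodic weight exchange
        ws = [merged.copy() for _ in range(n_agents)]
    return ws[0]

if __name__ == "__main__":
    w = parallel_sarsa()
    env = ChainEnv()
    greedy = [int(np.argmax([w @ phi(s, a, env.n) for a in range(N_ACTIONS)]))
              for s in range(env.n - 1)]
    print("greedy policy (1 = right):", greedy)
```

In a real parallel deployment each agent would run in its own process (the paper uses a Beowulf cluster), with the averaging step replaced by message passing; averaging linear-FA weights is only one simple way agents could pool their value-function knowledge.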