This paper describes several ensemble methods that combine multiple reinforcement learning (RL) algorithms in a single agent. The aim is to improve learning speed and final performance by combining the chosen actions or action probabilities of the constituent algorithms. We designed and implemented four ensemble methods that combine five RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and the AC learning automaton. The four intuitively designed methods, namely majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the individual algorithms; this contrasts with previous work, where ensemble methods in RL were used to represent and learn a single value function. We report experiments on five maze problems of varying complexity: the first is simple, while the other four are dynamic or partially observable. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms.
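To make the combination rules concrete, the following is a minimal sketch of two of the four ensemble methods named above, majority voting (MV) and Boltzmann multiplication (BM), applied to the action probabilities of several RL algorithms. The per-algorithm preference values and the temperature are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def boltzmann_policy(preferences, temperature=1.0):
    """Softmax (Boltzmann) distribution over an algorithm's action preferences."""
    z = np.exp(np.asarray(preferences, dtype=float) / temperature)
    return z / z.sum()

def majority_voting(policies):
    """MV: each algorithm casts one vote for its greedy action;
    the combined policy is proportional to the vote counts."""
    votes = np.zeros(len(policies[0]))
    for p in policies:
        votes[int(np.argmax(p))] += 1
    return votes / votes.sum()

def boltzmann_multiplication(policies):
    """BM: multiply the algorithms' action probabilities elementwise,
    then renormalize to a valid distribution."""
    product = np.prod(np.asarray(policies), axis=0)
    return product / product.sum()

if __name__ == "__main__":
    # Three hypothetical RL algorithms, each with preferences over 3 actions
    # (e.g. Q-values); real agents would derive these from learned value functions.
    prefs = [[1.0, 0.5, 0.2], [0.3, 1.2, 0.1], [0.9, 0.4, 0.6]]
    policies = [boltzmann_policy(p) for p in prefs]
    print("MV:", majority_voting(policies))
    print("BM:", boltzmann_multiplication(policies))
```

In both rules, each algorithm contributes equally; the ensemble can then act greedily or explore with respect to the combined distribution.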