Introduction to Reinforcement Learning
Automatic basis function construction for approximate dynamic programming and reinforcement learning
ICML '06 Proceedings of the 23rd international conference on Machine learning
Samuel meets Amarel: automating value function approximation using global state space analysis
AAAI'05 Proceedings of the 20th national conference on Artificial intelligence - Volume 2
Multi-agent reward shaping for RoboCup KeepAway
The 10th International Conference on Autonomous Agents and Multiagent Systems - Volume 3
RoboCup Keepaway is one of the most challenging multiagent system (MAS) testbeds, in which a team of keepers tries to keep the ball away from a team of takers. Most current work concentrates on learning for the keepers rather than the takers, although the latter also poses a great challenge for the application of reinforcement learning (RL). In this paper, we propose a task named takeaway for the takers and study their learning. We first employ an initial learning algorithm called Update on Steps (UoS) for the takers and demonstrate that it has two main faults: action oscillation and reliance on the designer's experience. We then present a novel RL algorithm called Dynamic CMAC Advantage Learning (DCMAC-AL). It uses advantage($\lambda$) learning to compute the value function and CMAC to generalize over the state space, and it creates novel features based on the Bellman error to improve the precision of the CMAC. Empirical results show that takers using DCMAC-AL learn efficiently.
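As background for the abstract above, the sketch below illustrates plain CMAC tile coding, the function-approximation component the abstract builds on: several offset tilings discretize a continuous input, one weight is active per tiling, and the value estimate is the sum of active weights. This is a generic illustration only, not the authors' DCMAC-AL (which additionally creates features from the Bellman error); the class name, parameters, and target function are all hypothetical.

```python
import math

class CMAC:
    """Minimal CMAC (tile coding) value approximator over a 1-D input:
    n_tilings offset tilings, each contributing one active weight;
    the prediction is the sum of the active weights."""

    def __init__(self, n_tilings=8, n_tiles=10, low=0.0, high=1.0, alpha=0.1):
        self.n_tilings = n_tilings
        self.n_tiles = n_tiles
        self.low, self.high = low, high
        self.alpha = alpha / n_tilings  # spread the step size across tilings
        # one weight table per tiling; n_tiles + 1 slots cover the offsets
        self.weights = [[0.0] * (n_tiles + 1) for _ in range(n_tilings)]

    def _active_tiles(self, x):
        """Index of the active tile in each offset tiling."""
        scaled = (x - self.low) / (self.high - self.low) * self.n_tiles
        return [int(scaled + t / self.n_tilings) for t in range(self.n_tilings)]

    def predict(self, x):
        return sum(self.weights[t][i]
                   for t, i in enumerate(self._active_tiles(x)))

    def update(self, x, target):
        """Move the prediction toward the target (TD-style update)."""
        error = target - self.predict(x)
        for t, i in enumerate(self._active_tiles(x)):
            self.weights[t][i] += self.alpha * error

# train the approximator toward a simple target function
cmac = CMAC()
for _ in range(200):
    for x in (0.05, 0.35, 0.65, 0.95):
        cmac.update(x, math.sin(math.pi * x))
```

Because each tiling is shifted by a fraction of a tile width, nearby inputs share some active tiles, which is what gives CMAC its local generalization; the abstract's point is that the precision of this fixed tiling can be improved by adding features where the Bellman error remains large.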