Regional Cooperative Multi-agent Q-learning Based on Potential Field

  • Authors:
  • Liang Liu; Longshu Li


  • Venue:
  • ICNC '08 Proceedings of the 2008 Fourth International Conference on Natural Computation - Volume 06
  • Year:
  • 2008


Abstract

More and more Artificial Intelligence researchers have focused on reinforcement learning (RL)-based multi-agent systems (MAS). Multi-agent learning problems can in principle be solved by treating the joint actions of the agents as single actions and applying single-agent Q-learning. However, the number of joint actions is exponential in the number of agents, rendering this approach infeasible for most problems. In this paper we investigate a regional cooperative representation of the Q-function based on a potential field, considering joint actions only in those states in which coordination is actually required. In all other states single-agent Q-learning is applied. This offers a compact state-action value representation without compromising much in terms of solution quality. We have performed experiments in the RoboCup 2D simulation league, an ideal testing platform for multi-agent systems, and compared our algorithm to other multi-agent reinforcement learning algorithms, with promising results.
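The core idea in the abstract (use a joint-action Q-table only in coordination states, and independent per-agent Q-tables elsewhere) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the potential-field coordination test is reduced to a hypothetical distance threshold, the two-agent setting, the reward split, and all names (`needs_coordination`, `best_value`, `update`) are assumptions for the sake of the example.

```python
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor

# Per-agent Q-tables for uncoordinated states, one joint table for coordination states.
single_q = [defaultdict(float), defaultdict(float)]  # single_q[i][(state, action)]
joint_q = defaultdict(float)                         # joint_q[(state, (a1, a2))]

def needs_coordination(state, threshold=2.0):
    """Hypothetical stand-in for the potential-field test: coordinate when the
    two agents are close. state = ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = state
    return abs(x1 - x2) + abs(y1 - y2) <= threshold

def best_value(state, actions):
    """Greedy value of a state under the representation used for that state."""
    if needs_coordination(state):
        return max(joint_q[(state, (a, b))] for a in actions for b in actions)
    # Outside coordination states, treat the agents as independent learners.
    return sum(max(single_q[i][(state, a)] for a in actions) for i in range(2))

def update(state, joint_action, reward, next_state, actions):
    """One Q-learning step, routed to the joint or per-agent tables."""
    target = reward + GAMMA * best_value(next_state, actions)
    if needs_coordination(state):
        key = (state, joint_action)
        joint_q[key] += ALPHA * (target - joint_q[key])
    else:
        # Split the shared reward evenly and update each agent independently.
        for i, a in enumerate(joint_action):
            next_best = max(single_q[i][(next_state, b)] for b in actions)
            single_q[i][(state, a)] += ALPHA * (
                reward / 2 + GAMMA * next_best - single_q[i][(state, a)])
```

The compactness claim follows directly: with `n` agents and `k` actions, only the (hopefully few) coordination states pay the `k**n` joint-action cost, while every other state stores `n * k` values.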