Hybrid Q-learning algorithm about cooperation in MAS

  • Authors:
  • Wei Chen; Jing Guo; Xiong Li; Jie Wang

  • Affiliations:
  • Automation Faculty, Guangdong University of Technology, Guangzhou, Guangdong, China (all authors)

  • Venue:
  • CCDC'09: Proceedings of the 21st Chinese Control and Decision Conference
  • Year:
  • 2009


Abstract

In most cases, agent learning is an effective method for solving challenging problems in multi-agent systems (MAS). Since learning efficiency varies significantly with the actions taken by each individual agent, a suitable algorithm plays an important role in solving such problems. Although much related work addresses different agent-learning algorithms, few of them balance efficiency and accuracy. In this paper, a hybrid Q-learning algorithm named CE-NNR, derived from CEQ learning and NNR Q-learning, is presented. The algorithm is then applied to the RoboCup soccer simulation system and is shown to be effective by the experimental results presented at the end of this paper.
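For background, the abstract builds on standard Q-learning. The sketch below shows the basic tabular Q-learning update on a toy chain world; it illustrates only the generic algorithm, not the paper's CE-NNR hybrid, and all names, the environment, and the parameter values are illustrative assumptions.

```python
import random

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Standard update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

def run(episodes=500, epsilon=0.1, seed=0):
    """Train on a 5-state chain: the agent starts at state 0 and is
    rewarded for reaching the goal at state 4 by moving right."""
    rng = random.Random(seed)
    actions = ["left", "right"]
    Q = {s: {a: 0.0 for a in actions} for s in range(5)}
    for _ in range(episodes):
        s = 0
        while s != 4:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                a = max(Q[s], key=Q[s].get)
            s_next = min(s + 1, 4) if a == "right" else max(s - 1, 0)
            r = 1.0 if s_next == 4 else 0.0
            q_learning_update(Q, s, a, r, s_next)
            s = s_next
    return Q

Q = run()
```

After training, the learned values prefer moving right everywhere on the chain; the paper's CE-NNR variant would replace the action-selection and update steps with its hybrid CEQ/NNR scheme.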