Coordinating Multiple Agents via Reinforcement Learning

  • Authors:
  • Gang Chen, Zhonghua Yang, Hao He, Kiah Mok Goh

  • Affiliations:
  • Information Communication Institute of Singapore, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798 (Gang Chen, Zhonghua Yang); Singapore Institute of Manufacturing Technology, Singapore 638075 (Hao He, Kiah Mok Goh)

  • Venue:
  • Autonomous Agents and Multi-Agent Systems
  • Year:
  • 2005

Abstract

In this paper, we apply reinforcement learning techniques to solve agent coordination problems in task-oriented environments. The Fuzzy Subjective Task Structure (FSTS) model is presented to model general agent coordination. We show that an agent coordination problem modeled in FSTS is a Decision-Theoretic Planning (DTP) problem, to which reinforcement learning can be applied. Two learning algorithms, "coarse-grained" and "fine-grained", are proposed to address agents' coordination behavior at two different levels: the "coarse-grained" algorithm operates at one level and tackles hard system constraints, while the "fine-grained" algorithm operates at another level and handles soft constraints. We argue that it is important to explicitly model and exploit coordination-specific information (particularly system constraints); this insight underpins the two algorithms and contributes to their effectiveness. The algorithms are formally proved to converge and experimentally shown to be effective.
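As background for the reinforcement-learning machinery the abstract refers to, the following is a minimal tabular Q-learning sketch on a toy single-agent chain task. The environment, states, rewards, and parameters here are hypothetical illustrations and are not taken from the paper's FSTS model or its two coordination algorithms.

```python
import random

random.seed(0)

# Toy chain task: states 0..4, episode ends at the goal state 4.
# Hypothetical environment for illustration only.
N_STATES = 5
ACTIONS = [-1, +1]          # move left or right along the chain
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Tabular action-value function, initialized to zero.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment transition: reward 1 only on reaching the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for _ in range(500):        # training episodes
    s = 0
    while s != N_STATES - 1:
        a = choose(s)
        s2, r = step(s, a)
        # Standard Q-learning update toward the bootstrapped target.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right from every state.
```

The paper's contribution lies in lifting this kind of learner to multi-agent coordination under hard and soft system constraints, which the sketch above does not attempt to capture.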