A learning-based coalition formation model for multiagent systems

  • Authors:
  • Leen-Kiat Soh; Xin Li

  • Affiliations:
  • University of Nebraska-Lincoln, Lincoln, NE; University of Nebraska-Lincoln, Lincoln, NE

  • Venue:
  • AAMAS '03: Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems
  • Year:
  • 2003

Abstract

In this paper, we present a learning-based coalition formation model that forms sub-optimal coalitions among agents to solve real-time, constrained allocation problems in a dynamic, uncertain, and noisy environment. The model consists of three stages (coalition planning, coalition instantiation, and coalition evaluation) and an integrated learning framework. An agent first derives a coalition formation plan via case-based reasoning (CBR). Guided by this plan, the agent instantiates a coalition through negotiations with other agents. When the process completes, the agent evaluates the outcome. The integrated learning framework operates at multiple levels embedded in the three stages: at the low level, concerning strategic and tactical details, an agent learns how to negotiate; at the meta-level, it learns how to improve both its planning and the actual execution of its plans. The model uses an approach that synthesizes reinforcement learning (RL) and case-based learning (CBL). We have partially implemented the model and conducted experiments on CPU allocation in a multisensor target-tracking domain.
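The three-stage loop described in the abstract (CBR-based planning, negotiation-based instantiation, and evaluation feeding both case-based and reinforcement learning) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class names (`CaseBase`, `Peer`, `Agent`), the single-parameter "offer" plan, and the acceptance-threshold negotiation are all simplifying assumptions made here for exposition.

```python
# Hypothetical sketch of a plan -> instantiate -> evaluate coalition loop.
# All names and the offer-threshold negotiation are illustrative assumptions.

class CaseBase:
    """CBR store: past situations mapped to coalition plans."""
    def __init__(self):
        self.cases = []  # (situation features, plan, utility)

    def retrieve(self, situation):
        # Pick the case with the largest feature overlap; ties favor the
        # most recently stored case. Fall back to a default plan.
        best_plan, best = None, -1
        for feats, plan, _utility in self.cases:
            overlap = len(situation & feats)
            if overlap >= best:
                best_plan, best = plan, overlap
        return dict(best_plan) if best_plan else {"offer": 0.5}

    def store(self, situation, plan, utility):
        # Case-based learning: keep each episode's (possibly adapted) plan.
        self.cases.append((frozenset(situation), plan, utility))

class Peer:
    """A potential coalition member with a private acceptance threshold."""
    def __init__(self, name, threshold):
        self.name, self.threshold = name, threshold

    def respond(self, offer):
        return offer >= self.threshold

class Agent:
    def __init__(self):
        self.case_base = CaseBase()
        self.q = {}  # RL values for (peer, offer-bucket) negotiation tactics

    def form_coalition(self, situation, peers, need):
        plan = self.case_base.retrieve(situation)        # stage 1: planning
        members = []
        for peer in peers:                               # stage 2: instantiation
            offer = plan["offer"]
            accepted = peer.respond(offer)
            self._reinforce(peer, offer, 1.0 if accepted else -1.0)
            if accepted:
                members.append(peer.name)
            if len(members) >= need:
                break
        utility = 1.0 if len(members) >= need else -1.0  # stage 3: evaluation
        if utility < 0:
            # Adapt the failed plan before storing it for future retrieval.
            plan["offer"] = min(1.0, plan["offer"] + 0.1)
        self.case_base.store(situation, plan, utility)
        return members, utility

    def _reinforce(self, peer, offer, reward, alpha=0.5):
        # Low-level RL update on the value of this negotiation tactic.
        key = (peer.name, round(offer, 1))
        self.q[key] = self.q.get(key, 0.0) + alpha * (reward - self.q.get(key, 0.0))

agent = Agent()
peers = [Peer("A", 0.4), Peer("B", 0.7)]
task = {"track-target", "need-cpu"}
# Repeated episodes: failed plans raise the stored offer until both peers join.
for _ in range(5):
    members, utility = agent.form_coalition(task, peers, need=2)
print(members, utility)
```

In this toy run, early coalitions fail because the default offer is below peer B's threshold; the evaluation stage adapts and re-stores the plan, so later CBR retrievals succeed, loosely mirroring how the paper's meta-level learning improves planning over repeated episodes.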