Modeling plan coordination in multiagent decision processes

  • Authors: Ping Xuan
  • Affiliation: Clark University, Worcester, MA
  • Venue: Proceedings of the 6th international joint conference on Autonomous agents and multiagent systems
  • Year: 2007

Abstract

In multiagent planning, it is often convenient to view a problem as two subproblems: agent local planning and coordination. Thus, we can classify agent activities into two categories, local problem-solving activities and coordination activities, with each category addressing the corresponding subproblem. However, recent mathematical models, such as decentralized Markov decision processes (DEC-MDPs) and decentralized partially observable Markov decision processes (DEC-POMDPs), view the problem as a single decision process and do not distinguish between agent local planning and coordination. In this paper, we present a synergistic representation that brings these two views together and show that they are equivalent. Under this representation, traditional plan coordination mechanisms can be conveniently modeled and interpreted as approximation methods for solving the underlying decision processes.
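To make the "single decision process" view concrete, the following is a minimal illustrative sketch (not the paper's model) of a two-agent DEC-MDP-style tuple: each agent picks a local action, but the transition and reward functions depend on the joint action, which is exactly why local planning alone cannot capture coordination. All names and the toy example are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

JointAction = Tuple[str, str]  # (agent 1's local action, agent 2's local action)

@dataclass
class DecMDP:
    """Toy two-agent decision process: states, per-agent action sets,
    joint transition model P(s' | s, a) and joint reward R(s, a)."""
    states: List[str]
    actions: Tuple[List[str], List[str]]
    P: Callable[[str, JointAction, str], float]
    R: Callable[[str, JointAction], float]

def evaluate(mdp: DecMDP,
             policies: Tuple[Callable[[str], str], Callable[[str], str]],
             s0: str, horizon: int) -> float:
    """Expected total reward of a pair of deterministic, memoryless
    local policies over a finite horizon (centralized evaluation)."""
    dist = {s0: 1.0}  # probability distribution over states
    total = 0.0
    for _ in range(horizon):
        nxt: dict = {}
        for s, p in dist.items():
            a = (policies[0](s), policies[1](s))  # joint action from local choices
            total += p * mdp.R(s, a)
            for s2 in mdp.states:
                q = mdp.P(s, a, s2)
                if q > 0.0:
                    nxt[s2] = nxt.get(s2, 0.0) + p * q
        dist = nxt
    return total

# Hypothetical example: agents earn reward only once "together",
# and reach "together" only if both choose "move" (a coordination point).
toy = DecMDP(
    states=["apart", "together"],
    actions=(["wait", "move"], ["wait", "move"]),
    P=lambda s, a, s2: (1.0 if s2 == "together" else 0.0)
        if a == ("move", "move") else (1.0 if s2 == s else 0.0),
    R=lambda s, a: 1.0 if s == "together" else 0.0,
)
both_move = (lambda s: "move", lambda s: "move")
value = evaluate(toy, both_move, "apart", 3)  # → 2.0
```

In this sketch the coordinated policy pair (both agents always choosing "move") accrues reward 2.0 over horizon 3, while any pair in which one agent waits stays "apart" and earns 0, illustrating why the joint process, not either local plan alone, determines the outcome.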