Interaction-driven Markov games for decentralized multiagent planning under uncertainty
Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems - Volume 1
Solving transition independent decentralized Markov decision processes. Journal of Artificial Intelligence Research.
We explore how local interactions can simplify decision-making in multiagent systems. We review the decentralized sparse-interaction Markov decision process model [3], which explicitly distinguishes the situations in which the agents in a team must coordinate from those in which they can act independently. We situate this class of problems within other multiagent models, such as MMDPs and transition-independent Dec-MDPs [2]. We contribute a new algorithm for efficient planning in this class of problems, and we provide empirical comparisons between our algorithm and other existing algorithms for this class of problems.
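To make the sparse-interaction idea concrete, the following is a minimal illustrative sketch (not the paper's implementation): each agent follows its own local MDP policy, and only in a designated set of "interaction states" does it defer to a coordinated joint policy. The function name `act` and the dictionary-based policies are assumptions made for illustration.

```python
def act(agent_state, other_state, local_policy, joint_policy, interaction_states):
    """Select an action for one agent in a sparse-interaction setting.

    The joint policy is consulted only when the joint state is flagged
    as an interaction state; otherwise the agent acts independently
    from its local policy.
    """
    joint_state = (agent_state, other_state)
    if joint_state in interaction_states:
        # Coordination is required here: use the joint policy.
        return joint_policy[joint_state]
    # Outside interaction areas, the agent plans and acts on its own.
    return local_policy[agent_state]


# Toy example: two grid cells, coordination needed only when both
# agents occupy "doorway".
local_policy = {"hall": "move_forward", "doorway": "enter"}
joint_policy = {("doorway", "doorway"): "wait"}
interaction_states = {("doorway", "doorway")}

print(act("hall", "doorway", local_policy, joint_policy, interaction_states))
print(act("doorway", "doorway", local_policy, joint_policy, interaction_states))
```

In this sketch the coordination overhead (the joint policy) is paid only on the small set of interaction states, which is the source of the computational savings over fully joint models such as MMDPs.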