Combinatorial optimization
Decentralized control of cooperative systems: categorization and complexity analysis. Journal of Artificial Intelligence Research.
Solving transition independent decentralized Markov decision processes. Journal of Artificial Intelligence Research.
Taming decentralized POMDPs: towards efficient policy computation for multiagent settings. Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI'03).
Sequential resource allocation in multiagent systems with uncertainties. Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems.
Optimal and approximate Q-value functions for decentralized POMDPs. Journal of Artificial Intelligence Research.
An investigation into mathematical programming for finite horizon decentralized POMDPs. Journal of Artificial Intelligence Research.
Resource-driven mission-phasing techniques for constrained agents in stochastic environments. Journal of Artificial Intelligence Research.
The Markov decision process (MDP) provides a framework for computing optimal policies for individual agents operating in uncertain environments. Extending single-agent MDP techniques to multiagent problems, however, is not straightforward. Previous complexity analyses have shown that the general decentralized Markov decision process (Dec-MDP) is NEXP-complete, meaning that solving a Dec-MDP optimally is extremely difficult. The class of problems studied in this paper is a subclass of Dec-MDPs in which two or more cooperative agents are coupled through the rewards of completing joint tasks, but the actions taken by one agent do not affect the other agents' transitions. Although this restriction reduces the complexity class to NP-complete [4], efficiently solving such transition independent Dec-MDPs remains nontrivial.
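As a rough sketch (the notation below is generic and not taken from the paper itself), transition independence for two agents means the world state factors as $S = S_1 \times S_2$ and the joint transition function decomposes into local components:

\[
P(s_1', s_2' \mid s_1, s_2, a_1, a_2) \;=\; P_1(s_1' \mid s_1, a_1)\, P_2(s_2' \mid s_2, a_2),
\]

while the agents remain coupled only through a joint reward $R(s_1, s_2, a_1, a_2)$ for completing shared tasks, which in general cannot be split into independent per-agent terms. It is this reward coupling that prevents the problem from decomposing into two independent single-agent MDPs.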