Variants of the decentralized MDP model focus on problems whose special structure makes them easier to solve in practice. Our work addresses two main issues. First, we propose a new model, Event-Driven Interaction with Complex Rewards, for problems with structured transition and reward dependencies. The model captures a wider range of problems than existing structured models, yet retains enough structure for heuristics and solution algorithms to exploit, chiefly by representing interactions explicitly as first-class entities. We formulate and solve instances of the model as bilinear programs. Second, we address the tractability of offline planning for communication. To this end, we propose heuristics that limit problem size by making communication available only at a few strategically chosen points, selected through an analysis that exploits the structure exposed by our model. Experimental results show that restricted communication reduces problem size and solution time with little or no loss in solution quality; our heuristics therefore allow us to solve problems that would otherwise be intractable.
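To make the bilinear-program formulation concrete, the sketch below shows a toy two-agent problem of the form max r1ᵀx + r2ᵀy + xᵀAy subject to separate linear constraints on each agent's variables, solved by alternating best responses. The matrices here are random stand-ins (not from the paper); simplex constraints play the role of each agent's policy constraints, and alternating LP solves converge to a local optimum of the bilinear objective.

```python
import numpy as np
from scipy.optimize import linprog

# Toy bilinear program (random data, for illustration only):
#   max  r1^T x + r2^T y + x^T A y
#   s.t. B1 x = b1,  B2 y = b2,  x, y >= 0
rng = np.random.default_rng(0)
n = m = 4
A = rng.standard_normal((n, m))
r1 = rng.standard_normal(n)
r2 = rng.standard_normal(m)
# Simplex constraints stand in for each agent's policy constraints.
B1, b1 = np.ones((1, n)), np.array([1.0])
B2, b2 = np.ones((1, m)), np.array([1.0])

def best_response(c, B, b):
    # With the other agent's variables fixed, the problem is a plain LP.
    # linprog minimizes, so negate the objective to maximize.
    res = linprog(-c, A_eq=B, b_eq=b, bounds=(0, None), method="highs")
    return res.x

y = np.full(m, 1.0 / m)  # start agent 2 from a uniform policy
for _ in range(50):
    x = best_response(r1 + A @ y, B1, b1)      # agent 1 best-responds to y
    y = best_response(r2 + A.T @ x, B2, b2)    # agent 2 best-responds to x

value = r1 @ x + r2 @ y + x @ A @ y
```

Each half-step solves an LP that cannot decrease the objective, so the joint value improves monotonically; the result is a locally optimal pair of policies, not necessarily a global optimum.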