Agent interactions in decentralized environments
Dec-POMDPs are intractable in the worst case. Recent approximation algorithms have achieved positive results, but their performance depends on parameters that must be set in advance, and little is known about how to choose those settings. We provide an information-theoretic measure of agent influence and show that it is (1) a good indicator of algorithm performance, and (2) a useful guide for setting algorithm parameters, potentially improving runtime and memory requirements dramatically without sacrificing solution quality.
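The abstract does not define the influence measure itself. As a loose illustration only, one natural information-theoretic way to quantify how much one agent's outcomes depend on another's is mutual information over a joint distribution of the two agents' local outcomes; the sketch below is a hypothetical stand-in, not the paper's actual measure.

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits, given a joint distribution as a dict {(x, y): p}.

    Zero means the two variables (e.g. two agents' local outcomes)
    are independent; larger values mean stronger coupling/influence.
    """
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Independent agents: the joint factors into marginals -> 0 bits.
independent = {(0, 0): 0.125, (0, 1): 0.375,
               (1, 0): 0.125, (1, 1): 0.375}
print(mutual_information(independent))  # -> 0.0

# Fully coupled agents: outcomes perfectly correlated -> 1 bit.
coupled = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_information(coupled))  # -> 1.0
```

Under this kind of measure, a problem scoring near zero could plausibly be solved with each agent planning almost independently, which is one way a parameter such as per-agent memory could be set in advance.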