Partial evaluation for planning in multiagent expedition
Canadian AI'11: Proceedings of the 24th Canadian Conference on Advances in Artificial Intelligence
DEC-POMDPs provide formal models of many cooperative multiagent problems, but solving them is NEXP-complete in general. We investigate a subclass of DEC-POMDPs termed multiagent expedition. A typical instance consists of an area populated by mobile agents. The agents have no prior knowledge of the area, their sensing and communication are limited, and the effects of their actions are uncertain. Success relies on planning actions that yield high accumulated rewards. We solve an instance of multiagent expedition based on the collaborative design network, a decision-theoretic multiagent graphical model. We present a number of techniques employed in knowledge representation and experimentally demonstrate the superior performance of our system compared with greedy agents.
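To make the problem setting concrete, the following is a minimal toy sketch of a multiagent-expedition-style environment and a one-step greedy baseline. The grid size, reward values, noise model, and `greedy_action` policy are all illustrative assumptions, not the paper's actual model; they only show why a myopic agent can accumulate low reward under uncertain action effects.

```python
import random

# Hypothetical grid environment (an assumption for illustration, not the
# paper's formal DEC-POMDP). Actions move an agent one cell; with some
# probability the action's effect is uncertain and a random move occurs.
MOVES = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0), "halt": (0, 0)}

class ExpeditionGrid:
    def __init__(self, size, rewards, noise=0.1, seed=0):
        self.size = size
        self.rewards = dict(rewards)   # (x, y) -> reward, consumed on visit
        self.noise = noise             # probability of a random action effect
        self.rng = random.Random(seed)

    def step(self, pos, action):
        # Uncertain effects: occasionally replace the chosen action.
        if self.rng.random() < self.noise:
            action = self.rng.choice(list(MOVES))
        dx, dy = MOVES[action]
        x = min(max(pos[0] + dx, 0), self.size - 1)
        y = min(max(pos[1] + dy, 0), self.size - 1)
        return (x, y), self.rewards.pop((x, y), 0.0)

def greedy_action(env, pos):
    # A greedy agent looks only one step ahead at immediate reward,
    # ignoring distant high-reward cells.
    def immediate(a):
        dx, dy = MOVES[a]
        x = min(max(pos[0] + dx, 0), env.size - 1)
        y = min(max(pos[1] + dy, 0), env.size - 1)
        return env.rewards.get((x, y), 0.0)
    return max(MOVES, key=immediate)

# Noise disabled here so the run is deterministic: the greedy agent grabs
# the nearby 1.0 reward and then stalls, never reaching the 5.0 at (4, 4).
env = ExpeditionGrid(size=5, rewards={(1, 0): 1.0, (4, 4): 5.0}, noise=0.0)
pos, total = (0, 0), 0.0
for _ in range(3):
    pos, r = env.step(pos, greedy_action(env, pos))
    total += r
# total is 1.0; the 5.0 reward is left uncollected.
```

The gap between this myopic baseline and a planner that trades off accumulated reward is what the paper's experiments measure.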