We develop a new graphical representation for interactive partially observable Markov decision processes (I-POMDPs) that is significantly more transparent and semantically clear than the previous representation. These graphical models, called interactive dynamic influence diagrams (I-DIDs), seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. I-DIDs generalize DIDs, which may be viewed as graphical representations of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs may be used to compute the policy of an agent online as the agent acts and observes in a setting populated by other interacting agents. Using several examples, we show how I-DIDs may be applied and demonstrate their usefulness.
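To illustrate the decomposition into chance, decision, and utility variables that the abstract describes, here is a minimal single-step influence-diagram sketch in Python. The payoff numbers and the names `solve_decision` and `utility` are hypothetical illustrations (loosely modeled on the well-known tiger problem), not taken from the paper; a full I-DID would add time slices, observation nodes, and model nodes for the other agents.

```python
# Hypothetical sketch: a one-slice influence diagram with a single
# chance node (world state), a decision node (actions), and a utility
# node depending on both. All names and numbers are illustrative.

def solve_decision(actions, prior, utility):
    """Return the action maximizing expected utility, where `prior`
    is a dict mapping states to probabilities (the chance node's
    distribution) and `utility(action, state)` is the utility node."""
    def expected_utility(action):
        return sum(p * utility(action, s) for s, p in prior.items())
    return max(actions, key=expected_utility)

# Chance node: where the tiger is, with a uniform prior belief.
prior = {"tiger-left": 0.5, "tiger-right": 0.5}

# Utility node: payoff as a function of its parents (action, state).
def utility(action, state):
    table = {
        ("open-left",  "tiger-left"): -100, ("open-left",  "tiger-right"):   10,
        ("open-right", "tiger-left"):   10, ("open-right", "tiger-right"): -100,
        ("listen",     "tiger-left"):   -1, ("listen",     "tiger-right"):   -1,
    }
    return table[(action, state)]

best = solve_decision(["open-left", "open-right", "listen"], prior, utility)
# Under a uniform prior, "listen" (expected utility -1) beats either
# door (expected utility -45), so best == "listen".
```

In a dynamic influence diagram this one-step evaluation would be repeated across time slices, with the agent's belief (the chance node's prior) updated after each action and observation; the I-DID extension additionally maintains beliefs over the other agents' models.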