Opponent modeling in a PGM framework
Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems
We consider a situation where two agents each try to solve their own task in a common environment. In particular, we study simple sequential Bayesian games with an unlimited time horizon, in which two players share a visible scene but the players' tasks (termed assignments) are private information. We present an influence diagram framework for representing this simple type of game, in which each player holds private information. The framework is used to model the analysis depth and time horizon of the opponent, and to determine an optimal policy under various assumptions about the opponent's analysis depth. Not surprisingly, the framework turns out to have severe complexity problems even in simple scenarios, owing to the size of the relevant past. We propose two approaches for approximation. The first uses Limited Memory Influence Diagrams (LIMIDs): we convert the influence diagram into a set of Bayesian networks and perform single policy update. The second is information enhancement, which assumes that the opponent will learn your assignment within a few moves. Empirical results are presented using a simple board game.
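To give a concrete feel for the LIMID approximation mentioned above, the sketch below runs single policy update on a toy limited-memory decision problem. It is purely illustrative and not the paper's board-game model: the hidden assignment `A`, the noisy observation `O` (accuracy 0.8), the two decision nodes `D` and `E`, and the utility function are all invented for this example. Single policy update iterates over the decision nodes, and at each step replaces one node's policy with the best response while the other policies are held fixed, until no local change improves expected utility.

```python
import itertools

# Toy LIMID (all numbers are assumptions for illustration):
#   A: hidden binary assignment, uniform prior
#   O: noisy observation of A (correct with probability 0.8)
#   D: decision that sees only O (limited memory)
#   E: decision that sees only D
# Utility: 1.0 for D matching A, plus a 0.5 bonus if E confirms D.

P_A = {0: 0.5, 1: 0.5}

def p_o_given_a(o, a):
    return 0.8 if o == a else 0.2

def utility(a, d, e):
    return (1.0 if d == a else 0.0) + (0.5 if e == d else 0.0)

def expected_utility(policy_d, policy_e):
    # Brute-force sum over all configurations of the chance variables.
    eu = 0.0
    for a in (0, 1):
        for o in (0, 1):
            d = policy_d[o]
            e = policy_e[d]
            eu += P_A[a] * p_o_given_a(o, a) * utility(a, d, e)
    return eu

def single_policy_update():
    # Start from arbitrary deterministic policies.
    policy_d = {0: 0, 1: 0}   # observation -> decision
    policy_e = {0: 0, 1: 0}   # D's choice  -> decision
    improved = True
    while improved:
        improved = False
        # Best-response update for D, with E's policy fixed.
        for cand_vals in itertools.product((0, 1), repeat=2):
            cand = {0: cand_vals[0], 1: cand_vals[1]}
            if expected_utility(cand, policy_e) > expected_utility(policy_d, policy_e) + 1e-12:
                policy_d, improved = cand, True
        # Best-response update for E, with D's policy fixed.
        for cand_vals in itertools.product((0, 1), repeat=2):
            cand = {0: cand_vals[0], 1: cand_vals[1]}
            if expected_utility(policy_d, cand) > expected_utility(policy_d, policy_e) + 1e-12:
                policy_e, improved = cand, True
    return policy_d, policy_e, expected_utility(policy_d, policy_e)

policy_d, policy_e, eu = single_policy_update()
print(policy_d, policy_e, eu)  # converges to "follow the observation" policies
```

In this tiny example single policy update converges to the globally optimal strategy (both decisions copy their parent), but in general it only guarantees a local optimum, which is the price paid for the memory restriction that makes the LIMID tractable.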