Monte Carlo sampling methods for approximating interactive POMDPs
Journal of Artificial Intelligence Research
State estimation consists of updating an agent's belief given the actions executed and the evidence observed to date. In single-agent environments, state estimation can be formalized with the Bayes filter; exact updates are tractable only in simple cases, so approximate techniques such as particle filtering are used in more realistic ones. This paper extends the particle filter to multiagent settings, resulting in the interactive particle filter. The main difficulty we tackle is that fully representing an agent's beliefs in such environments requires specifying probability distributions over both the physical state and the beliefs of the other agents. This leads to the interactive hierarchical belief systems first developed in game theory. Since the update of such beliefs proceeds recursively, the interactive particle filter samples and propagates particles at every level of the belief hierarchy. We present the algorithms, discuss some of their properties, and illustrate the performance of our implementation on simple examples.
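To make the underlying idea concrete, the following is a minimal sketch of the single-agent case the abstract builds on: one bootstrap particle-filter step approximating the Bayes filter. The function names (`transition_sample`, `observation_likelihood`) are illustrative assumptions, not the paper's API; the interactive particle filter described above recursively applies an analogous update to the particles that represent the other agents' beliefs at each level of the hierarchy.

```python
import random

def particle_filter_step(particles, action, observation,
                         transition_sample, observation_likelihood):
    """One bootstrap particle-filter update (approximate Bayes filter).

    particles: list of sampled states approximating the prior belief.
    transition_sample(s, a) -> s': samples a successor state.
    observation_likelihood(o, s') -> float, proportional to P(o | s').
    """
    # Propagation: push each particle through the transition model.
    propagated = [transition_sample(s, action) for s in particles]
    # Weighting: score each propagated particle by the observation.
    weights = [observation_likelihood(observation, s) for s in propagated]
    if sum(weights) == 0:
        # Observation impossible under every particle; keep the propagated set.
        return propagated
    # Resampling: draw particles with probability proportional to weight.
    return random.choices(propagated, weights=weights, k=len(particles))
```

In the interactive setting, each particle would additionally carry a nested particle set over the other agent's beliefs, and this same propagate-weight-resample cycle would be invoked on those nested sets, which is what makes the update recursive.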