An Introduction to Variational Methods for Graphical Models. Machine Learning.
Approximating state estimation in multiagent settings using particle filters. Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems.
Compact approximations of mixture distributions for state estimation in multiagent settings. Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 2.
Monte Carlo sampling methods for approximating interactive POMDPs. Journal of Artificial Intelligence Research.
State estimation in multiagent settings involves updating an agent's belief over both the physical state and the space of the other agents' models. The performance of the previous approach to state estimation, the interactive particle filter, degrades in large state spaces because it must distribute its particles over both the physical state space and the other agents' models. We present an improved method for estimating the state in a class of multiagent settings characterized in part by continuous or large discrete state spaces. We factor out the models of the other agents and update the agent's belief over these models as exactly as possible; simultaneously, we sample particles from the distribution over the large physical state space and project the particles forward in time. This approach is equivalent to Rao-Blackwellising the interactive particle filter. We focus our analysis on the special class of problems in which the nested beliefs are represented by Gaussians, the problem dynamics by conditional linear Gaussians (CLGs), and the observation functions by softmax models or CLGs. These distributions adequately represent many realistic applications.
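To make the factorization concrete, the following is a minimal, hypothetical sketch of Rao-Blackwellised particle filtering in a simplified single-variable setting (it is not the paper's interactive particle filter): the physical state `x` is sampled with particles, while the remaining variable `m` (standing in for an other-agent model parameter) is carried per particle as a Gaussian `(mu, var)` and updated analytically with a Kalman step. The linear-Gaussian dynamics and the additive observation model `z = x + m + noise` are assumptions chosen only to keep the marginalization exact.

```python
import numpy as np

def rb_particle_filter(zs, n_particles=500, A=0.9, Q=0.1,
                       C=1.0, R=0.01, S=0.5, seed=0):
    """Rao-Blackwellised particle filter sketch (illustrative assumptions).

    x_t = A * x_{t-1} + N(0, Q)   -- sampled physical state (particles)
    m_t = C * m_{t-1} + N(0, R)   -- kept as a per-particle Gaussian
    z_t = x_t + m_t + N(0, S)     -- observation
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)   # sampled physical states
    mu = np.zeros(n_particles)              # Gaussian mean over m, per particle
    var = np.ones(n_particles)              # Gaussian variance over m, per particle

    for z in zs:
        # Project the sampled states and the analytic Gaussians forward in time.
        x = A * x + rng.normal(0.0, np.sqrt(Q), n_particles)
        mu, var = C * mu, C**2 * var + R

        # Particle weights: likelihood of z with m marginalised out exactly,
        # since z | x ~ N(x + mu, var + S).
        s = var + S
        w = np.exp(-0.5 * (z - x - mu) ** 2 / s) / np.sqrt(2 * np.pi * s)
        w /= w.sum()

        # Exact (Kalman) update of each particle's Gaussian over m.
        k = var / s
        mu = mu + k * (z - x - mu)
        var = (1.0 - k) * var

        # Resample particles together with their attached Gaussians.
        idx = rng.choice(n_particles, n_particles, p=w)
        x, mu, var = x[idx], mu[idx], var[idx]

    return x.mean(), mu.mean(), var.mean()
```

Because `m` is marginalised analytically, the particles only need to cover the physical state space, which is the source of the variance reduction the abstract attributes to Rao-Blackwellisation.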