Approximating state estimation in multiagent settings using particle filters
Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems
Compact approximations to Bayesian predictive distributions
ICML '05 Proceedings of the 22nd international conference on Machine learning
Improved state estimation in multiagent settings with continuous or large discrete state spaces
AAAI'07 Proceedings of the 22nd national conference on Artificial intelligence - Volume 1
In order to act rationally, an agent must track the state of the environment over time. In the presence of other agents who themselves act, observe, and update their beliefs, the agent must track not only the physical state but also the possible states of the others. This is because the others' actions may affect both the evolution of the physical state and the agent's payoffs. One approach is to generalize the Bayes filter to multiagent settings, so that the agent tracks the evolution of the interactive state [2]. In practice, the estimation may be carried out using the interactive particle filter (I-PF) [2], which generalizes the particle filter (PF) to the multiagent setting.
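To make the filtering step concrete, the following is a minimal sketch of one step of the single-agent bootstrap particle filter that the I-PF generalizes: propagate particles through a transition model, weight them by the observation likelihood, and resample. The callbacks `transition` and `likelihood` are hypothetical model functions introduced here for illustration, not part of the paper or any particular library; the I-PF would additionally nest such a filter inside each particle to track the other agents' beliefs.

```python
import math
import random

def pf_step(particles, control, observation, transition, likelihood, rng):
    """One bootstrap particle-filter step: propagate, weight, resample.

    `transition(x, u, rng)` samples a successor state; `likelihood(z, x)`
    scores observation z given state x. Both are hypothetical callbacks.
    """
    # Propagate: sample each particle through the stochastic transition model.
    propagated = [transition(x, control, rng) for x in particles]
    # Weight: score each propagated particle by the observation likelihood.
    weights = [likelihood(observation, x) for x in propagated]
    total = sum(weights)
    if total == 0.0:
        # Degenerate case (all weights zero): fall back to uniform weights.
        weights = [1.0] * len(propagated)
        total = float(len(propagated))
    weights = [w / total for w in weights]
    # Resample: draw a new, equally weighted particle set.
    return rng.choices(propagated, weights=weights, k=len(propagated))

# Toy 1-D example: the state drifts by the control with Gaussian noise,
# and the observation is the state corrupted by Gaussian noise.
rng = random.Random(0)
transition = lambda x, u, r: x + u + r.gauss(0.0, 0.5)
likelihood = lambda z, x: math.exp(-0.5 * (z - x) ** 2)
particles = [rng.gauss(0.0, 1.0) for _ in range(500)]
for t in range(10):
    # True state after step t is t + 1; feed in a noiseless observation.
    particles = pf_step(particles, 1.0, float(t + 1), transition, likelihood, rng)
estimate = sum(particles) / len(particles)
```

After ten steps the posterior mean `estimate` should lie close to the true state of 10. The I-PF replaces the scalar state here with an interactive state (physical state plus a model of the other agent), which is why its computational cost grows with the nesting depth of beliefs.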