Approximating state estimation in multiagent settings using particle filters

  • Authors:
  • Prashant Doshi; Piotr J. Gmytrasiewicz

  • Affiliations:
  • University of Illinois at Chicago, IL; University of Illinois at Chicago, IL

  • Venue:
  • Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS)
  • Year:
  • 2005

Abstract

State estimation consists of updating an agent's belief given the actions executed and the evidence observed to date. In single-agent environments, state estimation can be formalized using the Bayes filter. Exact estimation is feasible in simple cases, but approximate techniques, such as particle filtering, are used in more realistic ones. This paper extends the particle filter to multiagent settings, resulting in the interactive particle filter. The main difficulty we tackle is that fully representing an agent's beliefs in such environments requires specifying probability distributions over the physical state as well as over the beliefs of other agents. This leads to the interactive hierarchical belief systems first developed in game theory. Since the update of such beliefs proceeds recursively, the interactive particle filter samples and propagates particles at all levels of the belief hierarchy. We present the algorithms, discuss some of their properties, and illustrate the performance of our implementation on simple examples.
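
The nested update described in the abstract can be sketched roughly as follows. This is a minimal illustration only: the model functions (`transition`, `obs_lik`, `sample_obs`, `policy`), the two-agent setup, and the assumption that both agents share the same models are simplifying assumptions made for the sketch, not the paper's exact algorithm.

```python
import random

# Hedged sketch of the nested belief update described in the abstract: a standard
# particle filter at level 0, and an "interactive" filter whose particles pair a
# physical state with the other agent's particle set one level down. The model
# functions and the shared-model assumption are illustrative, not the paper's
# exact formulation.

def resample(particles, weights, n):
    """Weighted resampling; falls back to uniform weights if all weights are zero."""
    total = sum(weights)
    if total == 0:
        weights = [1.0] * len(particles)
        total = float(len(particles))
    return random.choices(particles, weights=[w / total for w in weights], k=n)


def particle_filter(particles, joint_a, obs, transition, obs_lik, n):
    """Level-0 update: propagate each state sample, weight by the observation
    likelihood, and resample."""
    propagated = [transition(s, joint_a) for s in particles]
    weights = [obs_lik(obs, s, joint_a) for s in propagated]
    return resample(propagated, weights, n)


def interactive_particle_filter(iparticles, own_a, obs, level,
                                transition, obs_lik, sample_obs, policy, n):
    """Level-l update: each particle is (physical state, other agent's particle
    set at level l-1). The other agent's belief inside every particle is updated
    by a recursive call, so sampling and propagation happen on all levels of the
    belief hierarchy."""
    propagated, weights = [], []
    for s, other_belief in iparticles:
        other_a = policy(other_belief, level - 1)   # predict the other agent's action
        joint_a = (own_a, other_a)
        s_next = transition(s, joint_a)             # joint action drives the physical state
        other_o = sample_obs(s_next, joint_a)       # sample what the other agent might observe
        if level == 1:
            other_next = particle_filter(other_belief, joint_a, other_o,
                                         transition, obs_lik, len(other_belief))
        else:
            other_next = interactive_particle_filter(other_belief, other_a, other_o,
                                                     level - 1, transition, obs_lik,
                                                     sample_obs, policy, len(other_belief))
        propagated.append((s_next, other_next))
        weights.append(obs_lik(obs, s_next, joint_a))   # weight by own observation likelihood
    return resample(propagated, weights, n)
```

The structural point mirrors the abstract: at level l, each interactive particle carries the other agent's particle set at level l-1, so a single belief update triggers a cascade of recursive updates down the hierarchy.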