Networked distributed POMDPs: a synthesis of distributed constraint optimization and POMDPs

  • Authors:
  • Ranjit Nair; Pradeep Varakantham; Milind Tambe; Makoto Yokoo

  • Affiliations:
  • Automation and Control Solutions, Honeywell Laboratories, Minneapolis, MN; Computer Science Department, University of Southern California, Los Angeles, CA; Computer Science Department, University of Southern California, Los Angeles, CA; Department of Intelligent Systems, Kyushu University, Fukuoka, Japan

  • Venue:
  • AAAI'05 Proceedings of the 20th National Conference on Artificial Intelligence - Volume 1
  • Year:
  • 2005

Abstract

In many real-world multiagent applications such as distributed sensor networks, a network of agents is formed based on each agent's limited interactions with a small number of neighbors. While distributed POMDPs capture the real-world uncertainty in multiagent domains, they fail to exploit such locality of interaction. Distributed constraint optimization (DCOP) captures the locality of interaction but fails to capture planning under uncertainty. This paper presents a new model, called Networked Distributed POMDPs (ND-POMDPs), that synthesizes distributed POMDPs and DCOPs. Exploiting the network structure enables us to present two novel algorithms for ND-POMDPs: a distributed policy-generation algorithm that performs local search, and a systematic policy search that is guaranteed to reach the globally optimal joint policy.
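
The sketch below is an illustrative, hedged example of the locality of interaction the abstract refers to; it is not the paper's implementation. Under the assumption that the joint reward decomposes additively over small groups of neighboring agents (a DCOP-style interaction graph), changing one agent's action only affects the links that contain it, which is what a local policy search can exploit. The agent names, link structure, and reward values are invented for illustration.

```python
"""Illustrative sketch of locality of interaction (hypothetical example)."""

# Interaction graph: each link couples only a small set of neighbors.
# Here a1 and a3 never interact directly.
LINKS = [("a1", "a2"), ("a2", "a3")]

def local_reward(link, joint_action):
    """Hypothetical per-link reward: pays off when the linked agents coordinate."""
    actions = [joint_action[agent] for agent in link]
    return 1.0 if len(set(actions)) == 1 else 0.0

def joint_reward(joint_action):
    """Joint reward is the sum of the local, per-link rewards."""
    return sum(local_reward(link, joint_action) for link in LINKS)

if __name__ == "__main__":
    ja = {"a1": "track", "a2": "track", "a3": "scan"}
    print(joint_reward(ja))  # only the (a1, a2) link pays off -> 1.0

    # Changing a3's choice can only affect links containing a3, so a
    # local-search step for a3 need not re-evaluate the (a1, a2) link.
    ja["a3"] = "track"
    print(joint_reward(ja))  # both links pay off -> 2.0
```

Distributed POMDPs alone would treat the reward as an unfactored function of all agents' actions; the ND-POMDP model's contribution, as summarized above, is to combine such DCOP-style factorization with planning under uncertainty.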