Adaptive, Distributed Control of Constrained Multi-Agent Systems

  • Authors:
  • Stefan Bieniawski; David H. Wolpert

  • Affiliations:
  • Stanford University; NASA Ames Research Center

  • Venue:
  • AAMAS '04 Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 3
  • Year:
  • 2004

Abstract

Product Distribution (PD) theory was recently developed as a framework for analyzing and optimizing distributed systems. In this paper we demonstrate its use for adaptive distributed control of Multi-Agent Systems (MAS's), i.e., for distributed stochastic optimization using MAS's. One common way to perform the optimization is to have each agent run a Reinforcement Learning (RL) algorithm. PD theory provides an alternative based on a variant of Newton's method operating on the agents' probability distributions. We compare this alternative to RL-based search in three sets of computer experiments. The PD-theory-based approach outperforms the RL-based scheme in all three domains.
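The abstract only summarizes the approach, so the following is a minimal sketch of the probability-collectives-style idea behind PD theory: each agent keeps an independent distribution over its discrete moves, and the product of those distributions is pushed toward low expected cost. It uses a simple Boltzmann re-weighting of Monte-Carlo estimates of each agent's conditional expected cost rather than the Newton-method variant described in the paper, and the global cost function, temperature, and step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents = 8              # number of agents
moves = np.arange(5)      # each agent picks one of 5 discrete moves
target = 16               # parameter of the illustrative global objective

def world_cost(joint_move):
    # Illustrative global cost G(x): squared distance of the summed moves from target.
    return (joint_move.sum() - target) ** 2

# q[i] is agent i's probability distribution over its own moves;
# the joint search distribution is the product of the q[i].
q = np.full((n_agents, len(moves)), 1.0 / len(moves))

T = 1.0        # temperature weighting entropy against expected cost
alpha = 0.3    # step size mixing the old and new per-agent distributions
n_samples = 200

for it in range(60):
    # Sample joint moves from the current product distribution.
    samples = np.stack(
        [rng.choice(moves, size=n_samples, p=q[i]) for i in range(n_agents)],
        axis=1,
    )
    costs = np.array([world_cost(s) for s in samples])

    for i in range(n_agents):
        # Monte-Carlo estimate of E[G | agent i plays move a].
        cond = np.array([
            costs[samples[:, i] == a].mean() if np.any(samples[:, i] == a) else costs.mean()
            for a in moves
        ])
        # Boltzmann re-weighting: lower conditional cost -> higher probability.
        new_qi = np.exp(-(cond - cond.min()) / T)
        new_qi /= new_qi.sum()
        q[i] = (1 - alpha) * q[i] + alpha * new_qi

best = np.array([moves[np.argmax(q[i])] for i in range(n_agents)])
print("most likely joint move:", best, "cost:", world_cost(best))
```

In this sketch each agent updates only its own marginal distribution using a shared cost signal, which is the same decentralized structure the paper exploits; the paper's contribution is replacing ad hoc per-agent RL updates with a principled update derived from the product-distribution framework.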