Globally Optimal Multi-agent Reinforcement Learning Parameters in Distributed Task Assignment

  • Authors:
  • Dominik Dahlem; William Harrison

  • Venue:
  • WI-IAT '09 Proceedings of the 2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology - Volume 02
  • Year:
  • 2009


Abstract

Large-scale simulation studies are necessary to study the learning behaviour of individual agents and the overall system dynamics. One reason is that planning algorithms which find optimal solutions to fully observable general decentralised Markov decision problems do not admit polynomial-time worst-case complexity bounds. Additionally, agent interaction often implies a non-stationary environment, which does not lend itself to asymptotically greedy policies. Therefore, policies with a constant level of exploration are required so that agents can adapt continuously. This paper casts the application domain of distributed task assignment into the formalisms of queueing theory, complex networks, and decentralised Markov decision problems in order to analyse the impact of three parameters: the momentum of a standard back-propagation neural network function approximator, the discount factor of $SARSA(0)$ reinforcement learning, and the $\epsilon$ parameter of the $\epsilon$-greedy policy. For this purpose, large queueing networks of one thousand interacting agents are evolved. A Kriging metamodel is fitted to the simulation results and, in combination with simulated annealing, used to find optimal operating conditions with respect to the total average response time. The insights gained from this study provide guidance in deploying large-scale distributed task assignment systems modelled as multi-agent queueing networks.
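For concreteness, the learning rule whose parameters are studied can be sketched as below. This is a minimal tabular illustration of on-policy $SARSA(0)$ with an $\epsilon$-greedy policy, assuming a generic environment object with `reset()` and `step()` methods; the paper itself uses a momentum-based back-propagation neural network as function approximator rather than a lookup table, and all names here are illustrative.

```python
import random

def epsilon_greedy(q, state, actions, epsilon):
    """With probability epsilon explore uniformly; otherwise exploit.
    Epsilon is held constant, matching the paper's point that a constant
    level of exploration is needed in a non-stationary environment."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))

def sarsa_episode(env, q, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Run one SARSA(0) episode, updating the action-value table q.
    gamma is the discount factor varied in the study; env is an
    assumed interface, not part of the paper."""
    state = env.reset()
    action = epsilon_greedy(q, state, actions, epsilon)
    done = False
    while not done:
        next_state, reward, done = env.step(action)
        next_action = epsilon_greedy(q, next_state, actions, epsilon)
        # On-policy target: uses the action actually selected next.
        target = reward if done else reward + gamma * q.get((next_state, next_action), 0.0)
        q[(state, action)] = q.get((state, action), 0.0) + alpha * (target - q.get((state, action), 0.0))
        state, action = next_state, next_action
    return q
```

The metamodel-based optimisation step can be sketched in the same spirit. The code below assumes scikit-learn's `GaussianProcessRegressor` (Kriging is Gaussian-process regression) and SciPy's `dual_annealing` as stand-ins for the paper's Kriging fit and simulated-annealing search; the design points and response times are random placeholders, not data from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from scipy.optimize import dual_annealing

# Placeholder design points (momentum, discount factor, epsilon) and
# placeholder total average response times; real values would come
# from the queueing-network simulations.
X = np.random.rand(50, 3)
y = np.random.rand(50)

# Fit the Kriging metamodel (Gaussian-process regression, RBF kernel).
model = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)

def predicted_response_time(params):
    """Cheap surrogate for the expensive simulation."""
    return float(model.predict(np.asarray(params).reshape(1, -1))[0])

# Simulated annealing searches the surrogate for the parameter setting
# minimising the predicted total average response time.
bounds = [(0.0, 1.0)] * 3
result = dual_annealing(predicted_response_time, bounds, seed=0)
print("optimal (momentum, discount, epsilon):", result.x)
```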