A Learning Automata Approach to Multi-agent Policy Gradient Learning

  • Authors:
  • Maarten Peeters;Ville Könönen;Katja Verbeeck;Ann Nowé

  • Affiliations:
  • Computational Modeling Lab, Vrije Universiteit Brussel, Brussel, Belgium 1050;VTT Technical Research Centre of Finland, Oulu, Finland FI-90571;KaHo Sint-Lieven Information Technology Group, Gent, Belgium B-9000;Computational Modeling Lab, Vrije Universiteit Brussel, Brussel, Belgium 1050

  • Venue:
  • KES '08: Proceedings of the 12th International Conference on Knowledge-Based Intelligent Information and Engineering Systems, Part II
  • Year:
  • 2008

Abstract

The policy gradient method is a popular technique for implementing reinforcement learning in an agent system. One of the reasons is that a policy gradient learner has a simple design and strong theoretical properties in single-agent domains. Previously, Williams showed that the REINFORCE algorithm is a special case of policy gradient learning. He also showed that a learning automaton can be seen as a special case of the REINFORCE algorithm. Learning automata theory guarantees that a group of automata will converge to a stable equilibrium in team games. In this paper we show a theoretical connection between learning automata and policy gradient methods in order to transfer this convergence result to multi-agent policy gradient learning. An appropriate exploration technique is crucial for the convergence of a multi-agent system; since learning automata are guaranteed to converge, they possess such an exploration technique. We identify an exact mapping of a learning automaton onto the Boltzmann exploration strategy with a suitable temperature setting. The novel idea is that the temperature of the Boltzmann function depends not on time but on the action probabilities of the agents.
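
For concreteness, here is a minimal Python sketch of the two ingredients the abstract relates: a linear reward-inaction (L_R-I) learning automaton, which updates its action probabilities directly from the reward signal, and a standard Boltzmann (softmax) policy. The paper's specific probability-dependent temperature setting is not reproduced here; the function and variable names (`lri_update`, `boltzmann_policy`, the toy bandit payoffs) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def lri_update(p, action, reward, lam=0.1):
    """One linear reward-inaction (L_R-I) automaton step.

    The chosen action's probability moves toward 1 in proportion to
    the received reward in [0, 1]; all other probabilities shrink so
    that the vector still sums to 1.
    """
    p = p.copy()
    others = np.arange(len(p)) != action
    p[action] += lam * reward * (1.0 - p[action])
    p[others] -= lam * reward * p[others]
    return p / p.sum()  # renormalize against floating-point drift

def boltzmann_policy(q, temperature):
    """Standard Boltzmann (softmax) exploration over value estimates q."""
    z = np.exp((q - q.max()) / temperature)  # shift for numerical stability
    return z / z.sum()

# Toy two-action bandit: the automaton operates directly on action
# probabilities, with no explicit value function or temperature schedule.
rng = np.random.default_rng(0)
p = np.array([0.5, 0.5])
payoff = [0.2, 0.8]  # hypothetical expected rewards, for illustration only
for _ in range(2000):
    a = rng.choice(2, p=p)
    r = float(rng.random() < payoff[a])
    p = lri_update(p, a, r)
print(p)  # probability mass concentrates on the better action
```

The contrast illustrates the abstract's point: in the automaton, exploration is implicit in the probability vector itself, which is why recasting it as Boltzmann exploration requires a temperature that depends on those action probabilities rather than on a time schedule.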