Generalized learning automata for multi-agent reinforcement learning

  • Authors:
  • Yann-Michaël De Hauwere, Peter Vrancx, Ann Nowé

  • Affiliations:
  • (Corresponding author) Computational Modeling Lab, Vrije Universiteit Brussel, Brussels, Belgium. E-mails: {ydehauwe, pvrancx, anowe}@vub.ac.be

  • Venue:
  • AI Communications - European Workshop on Multi-Agent Systems (EUMAS) 2009
  • Year:
  • 2010

Abstract

A major challenge in multi-agent reinforcement learning remains dealing with the large state spaces typically associated with realistic multi-agent systems. As the state space grows, agent policies become increasingly complex and learning slows down. Currently, advanced single-agent techniques are already very capable of learning optimal policies in large unknown environments. When multiple agents are present, however, the state-action space grows exponentially in the number of agents, even though these agents do not always interfere with each other and their presence therefore need not always be included in the other agents' state information. A solution to this problem lies in the use of generalized learning automata (GLA). In this paper we first demonstrate how GLA can help select the correct actions in large unknown multi-agent environments. Furthermore, we introduce a framework capable of dealing with this issue of deciding when to observe other agents. We also present an implementation of our framework, called 2observe, which we apply to a set of gridworld problems. Finally, we demonstrate that our approach is capable of transferring its knowledge to new agents entering the environment.
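
To make the abstract's central ingredient concrete: a generalized learning automaton maps a context (feature) vector to a probability distribution over actions and nudges its parameters with a reward-weighted update. The sketch below is a minimal, illustrative Python implementation of such a softmax-parameterized GLA with a REINFORCE-style update rule; the class name, hyperparameters, and toy task are assumptions for illustration and are not taken from the paper or the 2observe framework itself.

```python
import numpy as np

class GeneralizedLearningAutomaton:
    """Illustrative softmax-parameterized GLA (one weight vector per action).

    Action probabilities are computed from a context vector x; weights are
    updated with a reward-scaled gradient of the log action probability.
    """

    def __init__(self, n_features, n_actions, learning_rate=0.05, rng=None):
        self.weights = np.zeros((n_actions, n_features))
        self.lr = learning_rate
        self.rng = rng or np.random.default_rng()

    def action_probabilities(self, x):
        scores = self.weights @ x
        scores -= scores.max()          # numerical stability
        exp_scores = np.exp(scores)
        return exp_scores / exp_scores.sum()

    def select_action(self, x):
        probs = self.action_probabilities(x)
        return self.rng.choice(len(probs), p=probs), probs

    def update(self, x, action, reward, probs):
        # Gradient of log softmax: (indicator(a) - probs) outer x.
        grad = -np.outer(probs, x)
        grad[action] += x
        self.weights += self.lr * reward * grad


# Toy usage: reward the automaton for matching its action to the active feature.
gla = GeneralizedLearningAutomaton(n_features=2, n_actions=2)
rng = np.random.default_rng(0)
for _ in range(2000):
    x = np.array([1.0, 0.0]) if rng.random() < 0.5 else np.array([0.0, 1.0])
    a, probs = gla.select_action(x)
    reward = 1.0 if a == int(x[1] == 1.0) else 0.0
    gla.update(x, a, reward, probs)
```

In a multi-agent setting of the kind the abstract describes, such an automaton could, for example, take local state features as context and learn whether another agent's position needs to be observed before acting, which is the intuition behind restricting the state information each agent keeps.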