Learning to coordinate in complex networks

  • Authors:
  • Sven Van Segbroeck; Steven De Jong; Ann Nowé; Francisco C. Santos; Tom Lenaerts

  • Affiliations:
  • COMO, Vrije Universiteit Brussel, Brussels, Belgium, and MLG, Université Libre de Bruxelles, Brussels, Belgium; COMO, Vrije Universiteit Brussel, Brussels, Belgium, and DKE, Universiteit Maastricht, Maastricht, Netherlands; COMO, Vrije Universiteit Brussel, Brussels, Belgium; CENTRIA, Departamento de Informática, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, Caparica, Portugal; COMO, Vrije Universiteit Brussel, Brussels, Belgium, and MLG, Université Libre de Bruxelles, Brussels, Belgium

  • Venue:
  • Adaptive Behavior - Animals, Animats, Software Agents, Robots, Adaptive Systems
  • Year:
  • 2010

Abstract

Designing an adaptive multi-agent system often requires the specification of interaction patterns between the different agents. To date, it remains unclear to what extent such interaction patterns influence the dynamics of the learning mechanisms inherent to each agent in the system. Here, we address this fundamental problem, both analytically and via computer simulations, examining networks of agents that engage in stag-hunt games with their neighbors and thereby learn to coordinate their actions. We show that the specific network topology does not affect the game strategy the agents learn on average. Yet, network features such as heterogeneity and clustering effectively determine how this average game behavior arises and how it manifests itself. Network heterogeneity induces variation in learning speed, whereas network clustering results in the emergence of clusters of agents with similar strategies. Such clusters also form when the network structure is not predefined, but shaped by the agents themselves. In that case, the strategy of an agent may become correlated both with the strategies of its neighbors and with its own degree. We show that the presence of such correlations drastically changes the overall learning behavior of the agents. As such, our work provides a clear-cut picture of the learning dynamics associated with networks of agents trying to optimally coordinate their actions.
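
The setup described in the abstract can be illustrated with a short simulation sketch. The snippet below is an illustrative assumption, not the authors' model: it places simple learning agents on a scale-free network and lets neighboring pairs repeatedly play a stag-hunt game, using a linear reward-inaction update as a stand-in for whatever learning mechanism the paper actually analyzes. The payoff values, learning rate, network generator, and round count are all hypothetical.

```python
# Minimal sketch: learning agents playing the stag-hunt on a network.
# All payoffs, parameters, and the learning rule are illustrative
# assumptions, not taken from the paper.
import random
import networkx as nx

# Stag-hunt payoffs (row player), with R > T >= P > S.
R, T, P, S = 1.0, 0.75, 0.5, 0.0
PAYOFF = {('stag', 'stag'): R, ('stag', 'hare'): S,
          ('hare', 'stag'): T, ('hare', 'hare'): P}

ALPHA = 0.05           # learning rate (assumed)
ROUNDS = 5000
N, M = 100, 2          # network size and attachment parameter (assumed)

def play(p):
    """Sample an action from an agent's mixed strategy p = Pr(stag)."""
    return 'stag' if random.random() < p else 'hare'

def update(p, action, reward):
    """Linear reward-inaction update of Pr(stag); reward lies in [0, 1]."""
    if action == 'stag':
        return p + ALPHA * reward * (1.0 - p)
    return p - ALPHA * reward * p

# Heterogeneous (scale-free) interaction structure; swap in e.g.
# nx.watts_strogatz_graph to study the effect of clustering instead.
G = nx.barabasi_albert_graph(N, M)
prob_stag = {node: 0.5 for node in G.nodes}  # uniform mixed strategies

for _ in range(ROUNDS):
    # Each round, every edge hosts one stag-hunt game between its endpoints.
    for i, j in G.edges:
        a_i, a_j = play(prob_stag[i]), play(prob_stag[j])
        prob_stag[i] = update(prob_stag[i], a_i, PAYOFF[(a_i, a_j)])
        prob_stag[j] = update(prob_stag[j], a_j, PAYOFF[(a_j, a_i)])

print('average Pr(stag):', sum(prob_stag.values()) / N)
```

Tracking Pr(stag) per node (rather than only the average) would make the abstract's observations visible in such a sketch: high-degree hubs accumulate updates faster than leaves, and neighboring agents tend to drift toward similar strategies.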