Distributed, heterogeneous, multi-agent social coordination via reinforcement learning

  • Authors:
  • Dongqing Shi, Michael Z. Sauter, Jerald D. Kralik

  • Affiliations:
  • Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH (all authors)

  • Venue:
  • ROBIO'09: Proceedings of the 2009 International Conference on Robotics and Biomimetics
  • Year:
  • 2009

Abstract

Multi-agent systems are becoming more popular in a variety of problem domains that benefit from increased parallelism, robustness, and scalability, ranging from search and rescue to investment management. Multi-agent systems research studies how multiple agents coordinate with one another to maximize a team goal or their own individual rewards. Coordination achieved through learning offers a great advantage over explicit modeling methods, especially as tasks become more complex and environments more dynamic. Because social primates such as chimpanzees are a highly successful multi-agent system that uses learning to adapt flexibly to changing social and environmental conditions, we are attempting to simulate their social cognition and behavior. This paper presents a foraging task designed to study how multiple agents can use reinforcement learning to coordinate as a group under social constraints while also trying to maximize their own reward. Each distributed, heterogeneous agent uses the WoLF-PHC algorithm, and with no communication, the agents learn to select the best foraging patch based on the behavior of others through the "Win or Learn Fast" heuristic. The simulation results demonstrate that the agents behave in a manner similar to the natural social behavior of chimpanzees, showing that we have a working model system for studying more complex chimpanzee social behavior in the future.
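
For readers unfamiliar with the learning rule named in the abstract, the following is a minimal Python sketch of the generic WoLF-PHC algorithm (Win or Learn Fast Policy Hill-Climbing, Bowling and Veloso, 2002) that each agent is described as running. The class structure, parameter values, and state/action encoding are illustrative assumptions; the paper's foraging-task specifics (patch rewards, social constraints, agent heterogeneity) are not reproduced here.

```python
import numpy as np

class WoLFPHCAgent:
    """Generic WoLF-PHC learner; hyperparameters are illustrative, not from the paper."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9,
                 delta_win=0.01, delta_lose=0.04):
        self.nA = n_actions
        self.alpha, self.gamma = alpha, gamma
        self.delta_win, self.delta_lose = delta_win, delta_lose
        self.Q = np.zeros((n_states, n_actions))
        # Current mixed policy and its running average, both initially uniform.
        self.pi = np.full((n_states, n_actions), 1.0 / n_actions)
        self.pi_avg = np.full((n_states, n_actions), 1.0 / n_actions)
        self.counts = np.zeros(n_states)

    def act(self, s, rng=np.random):
        # Sample an action (e.g., a foraging patch) from the current mixed policy.
        return rng.choice(self.nA, p=self.pi[s])

    def update(self, s, a, r, s_next):
        # Standard Q-learning update on the sampled transition.
        self.Q[s, a] += self.alpha * (
            r + self.gamma * self.Q[s_next].max() - self.Q[s, a])

        # Move the average policy toward the current policy.
        self.counts[s] += 1
        self.pi_avg[s] += (self.pi[s] - self.pi_avg[s]) / self.counts[s]

        # "Win or Learn Fast": adapt slowly while winning (current policy
        # outperforms the average policy), quickly while losing.
        winning = self.pi[s] @ self.Q[s] > self.pi_avg[s] @ self.Q[s]
        delta = self.delta_win if winning else self.delta_lose

        # Policy hill-climbing: shift probability mass toward the greedy action.
        best = self.Q[s].argmax()
        for a_i in range(self.nA):
            if a_i == best:
                continue
            step = min(self.pi[s, a_i], delta / (self.nA - 1))
            self.pi[s, a_i] -= step
            self.pi[s, best] += step
```

Note that the variable learning rate (delta_win vs. delta_lose) is what lets independent, non-communicating agents converge on complementary patch choices: an agent whose current policy is underperforming its historical average abandons it quickly, while a winning agent holds its strategy steady.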