Emerging Cooperation With Minimal Effort: Rewarding Over Mimicking

  • Authors:
  • G. N. Yannakakis; J. Levine; J. Hallam

  • Affiliations:
  • Univ. of Southern Denmark, Odense

  • Venue:
  • IEEE Transactions on Evolutionary Computation
  • Year:
  • 2007

Abstract

This paper compares supervised and unsupervised learning mechanisms for the emergence of cooperative multiagent spatial coordination, using a top-down approach. By observing the global performance of a group of homogeneous agents, each supported by only nonglobal knowledge of its environment, we attempt to extract information about the minimum size of the agent neurocontroller and the type of learning mechanism that collectively generate high-performing and robust behaviors with minimal computational effort. Accordingly, a methodology for obtaining controllers of minimal size is introduced, and a comparative study of supervised and unsupervised learning mechanisms for generating successful collective behaviors is presented. We have developed a prototype simulated world for our studies; this case study is primarily a computer-game-inspired world, but its main features are also biologically plausible. The agents are tested on two competing tasks: obstacle avoidance and target achievement. We demonstrate that cooperative behavior among agents, supported only by limited communication, appears to be necessary for an efficient solution to the problem, and that learning by rewarding the behavior of agent groups constitutes a more efficient and computationally preferable generic approach than supervised learning in such complex multiagent worlds.
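
To make the contrast concrete, below is a minimal, hypothetical Python sketch of the reward-based ("unsupervised") alternative the abstract favors: a single small neurocontroller, shared by all homogeneous agents, is evolved against a scalar group-performance reward rather than trained to imitate per-step teacher actions. The toy environment, fitness terms, network sizes, and all identifiers are illustrative assumptions, not the authors' simulator or method.

```python
# Hypothetical sketch: evolve one tiny shared controller for a homogeneous
# agent group, scored only by global group reward (no per-action supervision).
import random
import math

N_AGENTS = 5
N_INPUTS = 4      # e.g., local (nonglobal) sensor readings per agent
N_HIDDEN = 2      # "minimal size" controller: very few hidden units
N_OUTPUTS = 2     # e.g., turn/speed commands

def new_genome():
    """Random weight vector for a tiny fully connected net."""
    n = N_INPUTS * N_HIDDEN + N_HIDDEN * N_OUTPUTS
    return [random.uniform(-1, 1) for _ in range(n)]

def act(genome, inputs):
    """Feedforward pass of the shared controller (tanh activations)."""
    w1 = genome[:N_INPUTS * N_HIDDEN]
    w2 = genome[N_INPUTS * N_HIDDEN:]
    hidden = [math.tanh(sum(inputs[i] * w1[h * N_INPUTS + i]
                            for i in range(N_INPUTS)))
              for h in range(N_HIDDEN)]
    return [math.tanh(sum(hidden[h] * w2[o * N_HIDDEN + h]
                          for h in range(N_HIDDEN)))
            for o in range(N_OUTPUTS)]

def group_fitness(genome, steps=50):
    """Reward-based evaluation: one scalar for the whole group's behavior.
    The terms below are stand-ins for 'reach target' and 'avoid obstacle';
    the real paper scores performance in its simulated world instead."""
    score = 0.0
    for _ in range(steps):
        for _agent in range(N_AGENTS):
            sensors = [random.uniform(-1, 1) for _ in range(N_INPUTS)]
            turn, speed = act(genome, sensors)
            # Toy reward: favor motion toward the sensed target direction
            # (sensors[0]) while penalizing turning near an obstacle
            # (sensors[1] as proximity).
            score += speed * sensors[0] - abs(turn) * max(0.0, sensors[1])
    return score

def evolve(generations=100, pop_size=20, sigma=0.2):
    """Simple (1+lambda)-style hill climbing on the group reward."""
    best = new_genome()
    best_fit = group_fitness(best)
    for _ in range(generations):
        for _ in range(pop_size):
            child = [w + random.gauss(0, sigma) for w in best]
            fit = group_fitness(child)
            if fit > best_fit:
                best, best_fit = child, fit
    return best, best_fit

if __name__ == "__main__":
    random.seed(0)
    controller, fitness = evolve()
    print(f"best group reward: {fitness:.2f}")
```

A supervised ("mimicking") counterpart would instead minimize the per-step error between the controller's outputs and a teacher's demonstrated actions; the abstract's claim is that rewarding group behavior, as sketched above, is the more efficient and computationally preferable route in complex multiagent worlds.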