Generative encoding for multiagent learning

  • Authors:
  • David B. D'Ambrosio; Kenneth O. Stanley

  • Affiliations:
  • University of Central Florida, Orlando, FL, USA; University of Central Florida, Orlando, FL, USA

  • Venue:
  • Proceedings of the 10th annual conference on Genetic and evolutionary computation
  • Year:
  • 2008


Abstract

This paper argues that multiagent learning is a potential "killer application" for generative and developmental systems (GDS) because key challenges in learning to coordinate a team of agents are naturally addressed through indirect encodings and information reuse. For example, a significant problem for multiagent learning is that policies learned separately for different agent roles may nevertheless need to share a basic skill set, forcing the learning algorithm to reinvent the wheel for each agent. GDS is a good match for this kind of problem because it specializes in ways to encode patterns of related yet varying motifs. To establish the promise of this capability, the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) generative approach to evolving neurocontrollers learns a set of coordinated policies, encoded by a single genome, for a team of predator agents that work together to capture prey. Experimental results show that it is not only possible but also beneficial to encode a heterogeneous team of agents with an indirect encoding. The main contribution is thus to open up a significant new application domain for GDS.
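
As an illustrative aside (not taken from the paper itself), the sketch below shows in simplified form how an indirect encoding in the spirit of HyperNEAT can produce a whole team of related controllers from one genome: a single CPPN-like function is queried once per agent with an extra team-position input, so all agents share an underlying weight pattern while still varying by role. All names here (cppn, decode_team, substrate_nodes) and the specific functional form are hypothetical, chosen only to convey the idea; in HyperNEAT the CPPN is an evolved network, not a hand-written function.

    import math

    # Hypothetical stand-in for an evolved CPPN.  In HyperNEAT the CPPN's
    # topology and weights are evolved by NEAT; this fixed function only
    # illustrates how one genome can generate weights for a whole team.
    def cppn(x1, y1, x2, y2, agent_pos):
        # The first term depends only on relative neuron coordinates, so it is
        # shared across agents; the second term depends on the agent's team
        # position, producing related-but-heterogeneous controllers.
        shared = math.sin(x1 - x2) * math.exp(-(y1 - y2) ** 2)
        variation = math.tanh(agent_pos * (x1 + x2))
        return shared + 0.5 * variation

    def decode_team(num_agents, substrate_nodes):
        """Query the same CPPN once per agent to produce each agent's
        connection weights (an indirect encoding of the entire team)."""
        team = []
        for a in range(num_agents):
            # Normalize the agent index to [-1, 1] so team position can be
            # treated as a continuous spatial dimension.
            agent_pos = -1.0 + 2.0 * a / max(num_agents - 1, 1)
            weights = {}
            for (x1, y1) in substrate_nodes:       # source neuron coordinates
                for (x2, y2) in substrate_nodes:   # target neuron coordinates
                    weights[(x1, y1, x2, y2)] = cppn(x1, y1, x2, y2, agent_pos)
            team.append(weights)
        return team

    if __name__ == "__main__":
        nodes = [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)]
        controllers = decode_team(num_agents=3, substrate_nodes=nodes)
        for i, w in enumerate(controllers):
            print(f"agent {i}: sample weight {w[(-1.0, 0.0, 1.0, 0.0)]:.3f}")

The printed weights differ across agents only through the team-position input, which is the sense in which a single indirect encoding yields a heterogeneous but coordinated set of policies.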