Accelerating evolution via egalitarian social learning

  • Authors:
  • Wesley Tansey, Eliana Feasley, Risto Miikkulainen

  • Affiliation:
  • The University of Texas at Austin, Austin, TX, USA (all authors)

  • Venue:
  • Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation
  • Year:
  • 2012


Abstract

Social learning is an extension to evolutionary algorithms that enables agents to learn from observations of others in the population. Historically, social learning algorithms have employed a student-teacher model in which the behavior of one or more high-fitness agents is used to train a subset of the remaining agents in the population. This paper presents ESL, an egalitarian model of social learning in which agents are not assigned fixed teacher or student roles; instead, any individual that receives a sufficiently high reward may teach other agents to mimic its recent behavior. We validate the approach through a series of experiments in a robot foraging domain, comparing egalitarian social learning with baseline neuroevolution and with a variant of student-teacher social learning. In a complex foraging task, ESL converges to near-optimal strategies faster than either benchmark approach, outperforming both by more than an order of magnitude. These results indicate that egalitarian social learning is a promising new paradigm for social learning in intelligent agents.
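
As a rough illustration of the egalitarian teaching rule described in the abstract, the following Python sketch runs one ESL-style iteration in a toy stand-in for the foraging task: every agent acts, and any agent whose episode reward meets a threshold teaches the rest by supervised mimicry of its recent observation-action pairs. The ToyForaging and Agent classes, the EPSILON exploration rate, and the reward_threshold parameter are hypothetical simplifications for illustration, not the paper's actual neural-network implementation.

```python
# esl_sketch.py -- a toy illustration of egalitarian social learning (ESL).
# All names here (ToyForaging, Agent, reward_threshold, EPSILON) are
# hypothetical simplifications; the paper's agents are neural networks
# acting in a robot foraging domain.
import random


class ToyForaging:
    """Stand-in foraging task: the agent sees one of NUM_OBS situations and
    earns reward 1 whenever its action matches a hidden target action."""
    NUM_OBS, NUM_ACTIONS, HORIZON = 8, 4, 20

    def __init__(self, seed=0):
        rng = random.Random(seed)
        self.target = [rng.randrange(self.NUM_ACTIONS) for _ in range(self.NUM_OBS)]

    def episode(self, agent):
        """Run one episode; return (observation-action trace, total reward)."""
        rng = random.Random()
        trace, total = [], 0.0
        for _ in range(self.HORIZON):
            obs = rng.randrange(self.NUM_OBS)
            action = agent.act(obs)
            trace.append((obs, action))
            total += 1.0 if action == self.target[obs] else 0.0
        return trace, total


class Agent:
    """Toy policy: a lookup table from observations to actions."""
    EPSILON = 0.1  # small exploration rate so novel behavior can appear

    def __init__(self, rng):
        self.rng = rng
        self.policy = [rng.randrange(ToyForaging.NUM_ACTIONS)
                       for _ in range(ToyForaging.NUM_OBS)]

    def act(self, obs):
        if self.rng.random() < self.EPSILON:
            return self.rng.randrange(ToyForaging.NUM_ACTIONS)
        return self.policy[obs]

    def mimic(self, trace):
        # Supervised mimicry: adopt the teacher's action for each observation.
        for obs, action in trace:
            self.policy[obs] = action


def esl_iteration(agents, env, reward_threshold):
    """One ESL iteration: every agent acts, and any agent whose reward meets
    the threshold teaches all other agents -- no fixed teacher/student roles."""
    episodes = [(agent,) + env.episode(agent) for agent in agents]
    for teacher, trace, reward in episodes:
        if reward >= reward_threshold:
            for student in agents:
                if student is not teacher:
                    student.mimic(trace)
    return [reward for _, _, reward in episodes]


if __name__ == "__main__":
    rng = random.Random(42)
    env = ToyForaging()
    agents = [Agent(rng) for _ in range(10)]
    for gen in range(15):
        rewards = esl_iteration(agents, env, reward_threshold=8.0)
        print(f"iteration {gen:2d}  mean reward {sum(rewards) / len(rewards):.1f}")
```

For brevity the sketch omits the evolutionary component that ESL is meant to accelerate; in the paper, egalitarian teaching operates alongside neuroevolution of the agents rather than replacing it.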