Learning a common language through an emergent interaction topology

  • Authors:
  • Samarth Swarup, Kiran Lakkaraju, Les Gasser

  • Affiliation:
  • University of Illinois at Urbana-Champaign, Urbana, IL

  • Venue:
  • AAMAS '06 Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems
  • Year:
  • 2006


Abstract

We study the effects of various emergent topologies of interaction on the rate of language convergence in a population of communicating agents. The agents generate, parse, and learn sentences from each other using recurrent neural networks. An agent chooses another agent to learn from, based on that agent's fitness. Fitness is defined to include a frequency-dependent term capturing the approximate number of interactions an agent has had with others---its "popularity" as a teacher. This method of frequency-dependent selection is based on our earlier Noisy Preferential Attachment algorithm, which has been shown to produce various network topologies, including scale-free and small-world networks. We show that convergence occurs much more quickly with this strategy than it does for uniformly random interactions. In addition, this strategy more closely represents choice preference dynamics in large natural populations, and so may be more realistic as a model for adaptive language.
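The frequency-dependent teacher selection described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the `noise` parameter, and the use of raw interaction counts as the frequency-dependent fitness term are all assumptions. With probability `noise` an agent picks a teacher uniformly at random; otherwise it picks in proportion to each candidate's interaction count, which is the basic noisy-preferential-attachment idea.

```python
import random

def choose_teacher(interaction_counts, noise=0.1, rng=random):
    """Pick a teacher agent by noisy preferential attachment (illustrative sketch).

    interaction_counts: dict mapping agent id -> number of past interactions
                        (a proxy for the "popularity" fitness term).
    noise: probability of ignoring popularity and choosing uniformly.
    """
    agents = list(interaction_counts)
    total = sum(interaction_counts.values())
    # Uniform choice: either by the noise term, or when no agent has
    # interacted yet and popularity gives no signal.
    if total == 0 or rng.random() < noise:
        return rng.choice(agents)
    # Otherwise, roulette-wheel selection proportional to interaction count.
    r = rng.uniform(0, total)
    cumulative = 0.0
    for agent in agents:
        cumulative += interaction_counts[agent]
        if r <= cumulative:
            return agent
    return agents[-1]  # guard against floating-point edge cases
```

Under this rule, popular teachers accumulate interactions faster, which is the rich-get-richer dynamic that can yield scale-free interaction topologies; the noise term keeps unpopular agents reachable.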