Mutual online concept learning for multiple agents

  • Authors:
  • Jun Wang; Les Gasser

  • Affiliations:
  • University of Illinois at Urbana-Champaign, Champaign, IL; University of Illinois at Urbana-Champaign, Champaign, IL

  • Venue:
  • Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems: Part 1
  • Year:
  • 2002

Abstract

To create multi-agent systems that are both adaptive and open, agents must collectively learn to generate and adapt their own concepts, ontologies, interpretations, and even languages actively in an online fashion. A central issue is the potential lack of any pre-existing concept to be learned; instead, agents may need to collectively design a concept that evolves as they exchange information. This paper presents a framework for mutual online concept learning (MOCL) in a shared world. MOCL extends classical online concept learning from single-agent to multi-agent settings. Based on the Perceptron algorithm, we present a specific MOCL algorithm, called the mutual perceptron convergence algorithm, which converges within a finite number of mistakes under certain conditions. Analysis of the convergence conditions shows that the possibility of convergence depends on the quality of the instances the agents produce. Finally, we point out applications of MOCL and the convergence algorithm to the formation of adaptive ontological and linguistic knowledge, such as dynamically generated shared vocabulary and grammar structures.
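To make the setting concrete, the sketch below illustrates the general idea of mutual online concept learning with two perceptron agents that alternately teach each other using instances labeled by their own current hypotheses. This is not the paper's mutual perceptron convergence algorithm or its convergence analysis; the alternating speaker/hearer protocol, unit-norm instances, and the agreement measure are illustrative assumptions.

```python
import numpy as np


class PerceptronAgent:
    """Online learner using the classical mistake-driven Perceptron update."""

    def __init__(self, dim, rng):
        self.w = rng.normal(size=dim)   # current concept (hypothesis)
        self.mistakes = 0

    def classify(self, x):
        return 1 if np.dot(self.w, x) >= 0 else -1

    def produce_instance(self, dim, rng):
        """Generate an instance and label it with the agent's own concept."""
        x = rng.normal(size=dim)
        x /= np.linalg.norm(x)          # unit-norm instances (assumption)
        return x, self.classify(x)

    def receive(self, x, y):
        """Standard Perceptron rule: update only when a mistake is made."""
        if self.classify(x) != y:
            self.w += y * x
            self.mistakes += 1


def mutual_learning(rounds=2000, dim=10, seed=0):
    """Two agents alternate roles, each adapting toward the other's labels."""
    rng = np.random.default_rng(seed)
    a, b = PerceptronAgent(dim, rng), PerceptronAgent(dim, rng)
    for t in range(rounds):
        speaker, hearer = (a, b) if t % 2 == 0 else (b, a)
        x, y = speaker.produce_instance(dim, rng)
        hearer.receive(x, y)
    # Cosine similarity of the two hypotheses as a rough agreement measure.
    agreement = np.dot(a.w, b.w) / (np.linalg.norm(a.w) * np.linalg.norm(b.w))
    return a.mistakes, b.mistakes, agreement


if __name__ == "__main__":
    print(mutual_learning())
```

In this toy setup there is no fixed target concept: each agent's labels are generated from its own evolving hypothesis, so convergence means the agents' concepts come to agree with one another, mirroring the abstract's point that convergence hinges on the quality of the instances the agents produce.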