Reorganization of Agent Networks with Reinforcement Learning Based on Communication Delay

  • Authors:
  • Kazuki Urakawa; Toshiharu Sugawara

  • Venue:
  • WI-IAT '12 Proceedings of the 2012 IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technology - Volume 02
  • Year:
  • 2012

Abstract

We propose a team formation method for task allocation in agent networks that combines reinforcement learning based on communication delay with reorganization of the agent network. A task in a distributed environment such as an Internet application, e.g., grid computing or service-oriented computing, is usually achieved by performing a number of subtasks. These subtasks are constructed on demand in a bottom-up manner and must be executed by appropriate agents that have the capabilities and computational resources each subtask requires. Efficient and effective allocation of tasks to appropriate agents is therefore a key issue in this kind of system. In our model, this allocation problem is formulated as team formation of agents in the task-oriented domain. From this perspective, a number of studies have been conducted in which learning and reorganization were incorporated. The aim of this paper is to extend the conventional method in two ways. First, our proposed method uses only locally available information for learning, making it applicable to real systems. Second, we introduce the elimination of links, as well as the generation of links, in the agent network to improve learning efficiency. We experimentally show that this extension considerably improves the efficiency of team formation compared with the conventional method. We also show that it makes the agent network adaptive to environmental changes.
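The two extensions described above can be illustrated with a minimal sketch. This is not the paper's exact algorithm; the reward shaping `1/(1+delay)`, the thresholds, and all names (`Agent`, `update`, `reorganize`) are illustrative assumptions. Each agent learns, from locally observed communication delay only, a value estimate for each neighbor, then periodically eliminates low-value links and generates a link to a new candidate:

```python
import random

class Agent:
    """One node in the agent network (illustrative sketch, not the paper's algorithm)."""

    def __init__(self, name):
        self.name = name
        # Estimated value of delegating a subtask to each linked neighbor,
        # learned only from locally observed communication delay.
        self.q = {}  # neighbor name -> estimated value

    def add_link(self, other):
        # Generation of a link: start with a neutral value estimate.
        self.q.setdefault(other, 0.0)

    def update(self, neighbor, delay, alpha=0.1):
        # Reward is higher for lower observed delay; the 1/(1+delay)
        # shaping is an assumed, illustrative choice.
        reward = 1.0 / (1.0 + delay)
        self.q[neighbor] += alpha * (reward - self.q[neighbor])

    def reorganize(self, candidates, drop_threshold=0.2):
        # Elimination of links whose learned value fell below a threshold...
        for n in [n for n, v in self.q.items() if v < drop_threshold]:
            del self.q[n]
        # ...and generation of a link to an unexplored candidate neighbor.
        new = [c for c in candidates if c not in self.q and c != self.name]
        if new:
            self.add_link(random.choice(new))

    def choose_neighbor(self, eps=0.1):
        # Epsilon-greedy selection among currently linked neighbors.
        if not self.q:
            return None
        if random.random() < eps:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)
```

Under this sketch, a neighbor that consistently responds slowly sees its value estimate decay toward a small reward, so its link is eventually pruned and replaced by a link to a fresh candidate, which is how the network can adapt when communication conditions change.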