Multiagent reactive plan application learning in dynamic environments

  • Authors:
  • Hüseyin Sevay; Costas Tsatsoulis

  • Affiliations:
  • University of Kansas, Lawrence, KS; University of Kansas, Lawrence, KS

  • Venue:
  • Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 2
  • Year:
  • 2002

Abstract

In addition to bottom-up learning approaches, which facilitate emergent policy learning, it is also desirable to have top-down control over learning so that a team of agents can learn to apply general policies to diverse dynamic situations. We present a multiagent case-based learning methodology that achieves this top-down control. In this methodology, high-level symbolic plans describe the policies a team of agents needs to learn to apply to different situations. For each plan whose preconditions match the current team state, the agents learn to operationalize that plan. In each training scenario, each agent learns a sequence of actions that implements each step in the given plan so that the entire plan is operationalized under the current external conditions. This application knowledge is acquired by searching through a small set of available high-level actions and testing the success of each action sequence in the situated environment. Similarity between a new situation and existing cases is measured by considering only the state internal to the team, and an agent stores the successful action sequence for the current plan step indexed under the current external state. By repeating this process for each plan step across many diverse training scenarios, a team of agents learns how to operationalize an entire plan in a wide variety of external situations, thereby achieving generality. We demonstrate our approach using the RoboCup soccer simulator.
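
The following is a minimal sketch of the per-step search-and-store loop described in the abstract, not the authors' implementation. The action set, the Environment methods (observe_external_state, reset_scenario, execute, plan_step_satisfied), the CaseBase class, and the max_depth bound are all hypothetical placeholders used only to illustrate the idea of searching short action sequences and indexing successful ones under the external state.

```python
# Hypothetical sketch of operationalizing one plan step via search over
# short sequences of high-level actions; names are illustrative only.
import itertools

HIGH_LEVEL_ACTIONS = ["dash", "turn", "kick", "pass"]  # assumed small action set


class CaseBase:
    """Stores successful action sequences indexed under the external state."""

    def __init__(self):
        self.cases = {}

    def store(self, external_state, plan_step, actions):
        self.cases.setdefault((external_state, plan_step), []).append(actions)


def learn_plan_step(plan_step, env, case_base, max_depth=3):
    """Search action sequences of increasing length until one implements
    plan_step in the current situation, then store it as a case."""
    external_state = env.observe_external_state()  # assumed environment API
    for depth in range(1, max_depth + 1):
        for seq in itertools.product(HIGH_LEVEL_ACTIONS, repeat=depth):
            env.reset_scenario()                   # replay the training scenario
            for action in seq:
                env.execute(action)
            if env.plan_step_satisfied(plan_step): # test success in the situated environment
                case_base.store(external_state, plan_step, list(seq))
                return list(seq)
    return None  # no sequence of length <= max_depth operationalized this step
```

Repeating this loop for every plan step over many diverse training scenarios would populate the case base with operationalizations indexed by external state, which is the source of the generality claimed in the abstract.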