Multiagent reactive plan application learning in dynamic environments

  • Authors:
  • Hüseyin Sevay; Costas Tsatsoulis

  • Affiliations:
  • Department of Computer Engineering, Near East University, Nicosia, Cyprus; Department of Computer Science and Engineering, University of North Texas, Denton, TX

  • Venue:
  • Proceedings of the 15th WSEAS International Conference on Computers
  • Year:
  • 2011

Abstract

With bottom-up learning approaches such as reinforcement learning (RL), a team of agents can only learn emergent policies. However, it may also be desirable to constrain the policy search from the top down so that a team can learn more explicit policies in dynamic environments with continuous search spaces. In this paper we present a multiagent learning methodology that combines case-based learning and RL to address this need. Symbolic plans describe, at a high level, the policies that a team of agents needs to learn for a wide variety of situations. For each high-level plan whose preconditions match the current state of their team, agents learn how to operationalize each step in that plan. For each training scenario, a team learns a sequence of actions that its agents can execute so that each plan step is operationalized under the prevailing external conditions; this application knowledge is acquired via RL. We use simulated robotic soccer to demonstrate this learning approach.
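
As a rough illustration of the methodology the abstract outlines, the sketch below shows how symbolic plans might gate policy learning: only plans whose preconditions hold in the current state are candidates, and a Q-learning update scoped to each plan step acquires the application knowledge for that step. This is a minimal sketch under assumed names (`Plan`, `matching_plans`, `q_update`, the reward signal), not the authors' implementation.

```python
# Hypothetical sketch of plan-constrained RL: symbolic plans restrict the
# policy search from the top down; RL learns how to execute each plan step.
import random
from collections import defaultdict

class Plan:
    def __init__(self, name, preconditions, steps):
        self.name = name
        self.preconditions = preconditions  # predicates over symbolic team state
        self.steps = steps                  # ordered high-level step labels

def matching_plans(plans, state):
    """Return the plans whose preconditions all hold in the current state."""
    return [p for p in plans if all(pred(state) for pred in p.preconditions)]

# One Q-table per (plan step, state) pair: the team learns, per step, which
# low-level actions operationalize that step under current conditions.
Q = defaultdict(lambda: defaultdict(float))
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # illustrative learning parameters

def choose_action(step, state_key, actions):
    """Epsilon-greedy action selection within one plan step."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(step, state_key)][a])

def q_update(step, state_key, action, reward, next_state_key, actions):
    """One-step Q-learning update, scoped to a single plan step."""
    best_next = max(Q[(step, next_state_key)][a] for a in actions)
    Q[(step, state_key)][action] += ALPHA * (
        reward + GAMMA * best_next - Q[(step, state_key)][action]
    )
```

In this reading, the symbolic plan supplies the structure (which steps to learn, and when a plan applies at all), while the RL component only has to search the much smaller space of action sequences that realize a single step.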