Using conflict resolution to inform decentralized learning

  • Authors:
  • Shanjun Cheng, Anita Raja, Victor R. Lesser

  • Affiliations:
  • Altisource, Greensboro, NC, USA; The University of North Carolina at Charlotte, Charlotte, NC, USA; University of Massachusetts Amherst, Amherst, MA, USA

  • Venue:
  • Proceedings of the 2013 International Conference on Autonomous Agents and Multi-agent Systems (AAMAS 2013)
  • Year:
  • 2013

Abstract

Learning consistent policies in decentralized settings is often problematic. Agents have only a myopic view of their neighbors' states, which can lead to inconsistent action choices. The fundamental question addressed in this work is how to determine and obtain the minimal overlapping context among decentralized decision makers required to make their decisions more consistent. Our approach is a two-phase learning process in which agents first learn their policies offline in a simplified environment that does not require detailed context information about neighbors. These local policies are then applied in more complex "real" environments, where agents are expected to encounter a much higher rate of inconsistencies (conflicts) with neighbors' actions. When conflicts are observed, agents switch to "special" states that augment the local policy states with additional non-local state information, and learn alternative actions for these specific situations. This yields action choices that are less likely to lead to conflicts. We evaluate our approach on meta-level decisions in a complex multiagent weather-tracking domain. Experimental results show that our approach achieves good utility and conflict-resolution performance while exploring only a small fraction of the whole search space.
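
The abstract outlines a conflict-driven state-augmentation scheme. A minimal sketch of that idea, assuming a simple tabular Q-learning setting, might look like the following; the class, method names, and update rule are illustrative assumptions, not the paper's actual meta-level MDP formulation or weather-tracking implementation.

```python
import random
from collections import defaultdict


class ConflictAwareAgent:
    """Sketch of a learner that augments its state with neighbor context
    only for states where conflicts have been observed (hypothetical)."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q_local = defaultdict(float)    # phase 1: local-only states
        self.q_special = defaultdict(float)  # phase 2: (local, neighbor) states
        self.special_states = set()          # local states flagged by conflicts

    def _key(self, local_state, neighbor_context):
        # "Special" states fold in non-local information; others stay local.
        if local_state in self.special_states:
            return (local_state, neighbor_context), self.q_special
        return local_state, self.q_local

    def choose(self, local_state, neighbor_context):
        state, table = self._key(local_state, neighbor_context)
        if random.random() < self.epsilon:
            return random.choice(self.actions)  # epsilon-greedy exploration
        return max(self.actions, key=lambda a: table[(state, a)])

    def observe_conflict(self, local_state):
        # A conflict with a neighbor's action promotes this local state
        # to the augmented representation for all future decisions.
        self.special_states.add(local_state)

    def update(self, local_state, neighbor_context, action, reward,
               next_local, next_neighbor):
        # Standard Q-learning backup, routed to whichever table owns
        # the current and successor states.
        s, table = self._key(local_state, neighbor_context)
        s2, table2 = self._key(next_local, next_neighbor)
        best_next = max(table2[(s2, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - table[(s, action)]
        table[(s, action)] += self.alpha * td_error
```

In this sketch, phase 1 would populate `q_local` offline; at deployment, `observe_conflict` grows the set of augmented states incrementally, so the larger (local, neighbor) space is explored only where conflicts actually arise, matching the abstract's point about searching a small fraction of the joint space.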