Multi-agent plan adaptation using coordination patterns in team adversarial games

  • Authors:
  • Kennard Laviers

  • Affiliations:
  • University of Central Florida

  • Venue:
  • Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
  • Year:
  • 2010

Abstract

One issue with learning effective policies in multi-agent adversarial games is that the size of the search space can be prohibitively large when the actions of all the players are considered simultaneously. In most team games, players need to coordinate to accomplish tasks, either in a preplanned or emergent manner. An effective team policy must generate the necessary coordination, yet considering all possibilities for creating coordinating subgroups is computationally infeasible. We propose that reusable coordination patterns can be identified from successful training exemplars and used to guide multi-agent policy search. Experiments are conducted within the Rush 2008 football simulator and show how an analysis of mutual information and workflow can be used to identify subgroups of players that frequently coordinate within a particular formation. Using a K* classifier, we devised a system that learns a ranking of subgroups by their impact on offensive performance. Results show how knowledge of the top-ranked subgroup can be used to focus search under two different policy generation methods: 1) play adaptation and 2) UCT Monte Carlo (MC) planning. Our method produces superior plans that double the offensive team's performance in the Rush 2008 football simulator relative to prior methods.
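
To make the subgroup-identification step concrete, the following is a minimal sketch of how pairwise mutual information between players' discretized action streams might be estimated. The player names, action alphabet, and MI threshold here are illustrative assumptions, not details taken from the paper.

    from collections import Counter
    from itertools import combinations
    from math import log2

    def mutual_information(xs, ys):
        """Estimate I(X;Y) in bits from two aligned symbol sequences."""
        n = len(xs)
        px, py = Counter(xs), Counter(ys)
        pxy = Counter(zip(xs, ys))
        # I(X;Y) = sum over (x,y) of p(x,y) * log2( p(x,y) / (p(x) p(y)) )
        return sum((c / n) * log2(c * n / (px[x] * py[y]))
                   for (x, y), c in pxy.items())

    def coordinating_pairs(traces, threshold=1.2):
        """Return player pairs whose action streams share high MI.
        The threshold is an assumed value for illustration only."""
        return [(a, b) for a, b in combinations(traces, 2)
                if mutual_information(traces[a], traces[b]) >= threshold]

    # Toy example: per-timestep action traces for three offensive players.
    traces = {
        "WR1": ["run", "cut", "block", "cut"],
        "WR2": ["run", "cut", "block", "cut"],
        "QB":  ["drop", "scan", "drop", "throw"],
    }
    print(coordinating_pairs(traces))  # -> [('WR1', 'WR2')]

High-MI pairs found this way would feed the subgroup-ranking step the abstract describes (the K* classifier); how the paper actually forms and scores subgroups from these statistics is not detailed in the abstract, so the sketch stops at the pairwise step.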