MB-AIM-FSI: a model based framework for exploiting gradient ascent multiagent learners in strategic interactions

  • Authors: Doran Chakraborty; Sandip Sen
  • Affiliations: University of Texas at Austin, Austin, Texas; University of Tulsa, Tulsa, Oklahoma
  • Venue: Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems - Volume 1
  • Year: 2008

Abstract

Future agent applications will increasingly represent human users autonomously or semi-autonomously in strategic interactions with similar entities. Hence, there is a growing need for algorithmic approaches that can learn to recognize commonalities in opponent strategies and exploit those commonalities to improve strategic response. Recently, a framework [9] was proposed that aims for targeted optimality against a set of finite-memory opponents. We propose an approach that aims for targeted optimality against the set of all possible multiagent learning algorithms that perform gradient search to select a single-stage Nash equilibrium of a repeated game. Such opponents induce a Markov Decision Process as the learning environment, and appropriate responses to such environments can be learned by assuming a generative model of the environment. In the absence of a generative model, we present a framework, MB-AIM-FSI, that models the opponent online based on interactions, solves the model offline once sufficient information has been gathered, stores the resulting strategy in a repository, and finally uses it judiciously when playing against the same or a similar opponent at a later time.
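To make the "gradient search" opponent class concrete, the following is a minimal sketch (not the authors' implementation) of an infinitesimal-gradient-ascent-style learner in a 2x2 repeated game. The payoff matrix `A`, the step size `eta`, and the function names are illustrative assumptions. Because such a learner updates its mixed strategy deterministically along the payoff gradient, from the other agent's perspective the interaction induces a Markov Decision Process whose state includes the opponent's current strategy.

```python
import numpy as np

# Illustrative row-player payoff matrix for a 2x2 game (an assumption,
# not taken from the paper).
A = np.array([[3.0, 0.0],
              [0.0, 2.0]])

def iga_step(p, q, eta=0.05):
    """One gradient-ascent step for the row player's probability p of
    playing action 0, given the column player's probability q of action 0.

    Expected payoff: V(p, q) = [p, 1-p] @ A @ [q, 1-q]^T, so
    dV/dp = (A[0,0] - A[1,0]) * q + (A[0,1] - A[1,1]) * (1 - q).
    """
    grad = (A[0, 0] - A[1, 0]) * q + (A[0, 1] - A[1, 1]) * (1 - q)
    # Project back onto the valid probability interval [0, 1].
    return float(np.clip(p + eta * grad, 0.0, 1.0))

# Simulate the gradient learner against a fixed opposing strategy q = 0.9.
p = 0.5
for _ in range(200):
    p = iga_step(p, q=0.9)

print(p)  # the strategy converges to a pure best response here
```

The key point for MB-AIM-FSI is that `p` evolves as a deterministic function of the interaction history, which is what lets the opponent be modeled online and the induced MDP solved offline.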