Learning and Exploiting Relative Weaknesses of Opponent Agents

  • Authors:
  • Shaul Markovitch; Ronit Reger

  • Affiliations:
  • Computer Science Department, Technion, Israel Institute of Technology, Israel (both authors)

  • Venue:
  • Autonomous Agents and Multi-Agent Systems
  • Year:
  • 2005


Abstract

Agents in a competitive interaction can greatly benefit from adapting to a particular adversary, rather than using the same general strategy against all opponents. One method of such adaptation is Opponent Modeling, in which a model of an opponent is acquired and utilized as part of the agent's decision procedure in future interactions with this opponent. However, acquiring an accurate model of a complex opponent strategy may be computationally infeasible. In addition, if the learned model is not accurate, then using it to predict the opponent's actions may harm the agent's strategy rather than improve it. We thus define the concept of opponent weakness, and present a method for learning a model of this simpler concept. We analyze examples of past behavior of an opponent in a particular domain, judging its actions using a trusted judge. We then infer a weakness model based on the opponent's actions relative to the domain state, and incorporate this model into our agent's decision procedure. We also make use of a similar self-weakness model, allowing the agent to prefer states in which the opponent is weak and our agent strong, where we have a relative advantage over the opponent. Experimental results spanning two different test domains demonstrate the agents' improved performance when making use of the weakness models.
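
The abstract does not spell out how the weakness models enter the decision procedure. The sketch below is only an illustration of the general idea, assuming hypothetical callables `base_eval`, `opponent_weakness`, and `self_weakness` (none of which come from the paper): the agent biases its evaluation toward states where the opponent's estimated weakness exceeds its own, i.e. where it holds a relative advantage.

```python
# Illustrative sketch only; the paper's actual formulation is not given in the abstract.
# base_eval, opponent_weakness, and self_weakness are assumed, hypothetical functions
# mapping a domain state to a score.

def weakness_adjusted_eval(state, base_eval, opponent_weakness, self_weakness,
                           alpha=0.5):
    """Combine the domain evaluation with a relative-weakness bonus.

    alpha controls how strongly the agent steers toward states where the
    opponent is predicted to act poorly and our agent to act well.
    """
    relative_advantage = opponent_weakness(state) - self_weakness(state)
    return base_eval(state) + alpha * relative_advantage


def choose_move(state, legal_moves, apply_move, base_eval,
                opponent_weakness, self_weakness):
    """Greedy one-ply decision procedure using the adjusted evaluation."""
    return max(
        legal_moves(state),
        key=lambda m: weakness_adjusted_eval(
            apply_move(state, m), base_eval, opponent_weakness, self_weakness
        ),
    )
```

In a game-playing setting, the same adjusted evaluation could equally be plugged into a deeper minimax or alpha-beta search in place of the one-ply greedy choice shown here.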