Efficient algorithms to solve Bayesian Stackelberg games for security applications

  • Authors:
  • Praveen Paruchuri;Jonathan P. Pearce;Janusz Marecki;Milind Tambe;Fernando Ordonez;Sarit Kraus

  • Affiliations:
  • Intelligent Automation Inc., Rockville, MD;University of Southern California, Los Angeles, CA;University of Southern California, Los Angeles, CA;University of Southern California, Los Angeles, CA;University of Southern California, Los Angeles, CA;Bar-Ilan University, Israel

  • Venue:
  • AAAI'08: Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 3
  • Year:
  • 2008

Abstract

In a class of games known as Stackelberg games, one agent (the leader) must commit to a strategy that can be observed by the other agent (the adversary/follower) before the adversary chooses its own strategy. We consider Bayesian Stackelberg games, in which the leader is uncertain about the type of adversary it may face. Such games are important in security domains, where, for example, a security agent (leader) must commit to a strategy of patrolling certain areas, and an adversary (follower) can observe this strategy over time before choosing where to attack. We present two mixed-integer programming (MIP) formulations for Bayesian Stackelberg games: ASAP, which provides approximate policies with controlled randomization, and DOBSS, which provides optimal policies. DOBSS is currently the fastest optimal procedure for Bayesian Stackelberg games and is in use by police at the Los Angeles International Airport (LAX) to schedule their activities.
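The leader-commitment structure described above can be made concrete with a small example. The sketch below is not the paper's DOBSS or ASAP formulation (those handle Bayesian uncertainty over adversary types); it is the standard "multiple LPs" approach for a single, known follower type: for each follower pure strategy, solve a linear program that maximizes the leader's expected payoff subject to that strategy being the follower's best response, then keep the best of these solutions. The payoff matrices R and C and the function name optimal_commitment are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: optimal leader commitment against a single follower type,
# via one LP per follower pure strategy (not the paper's DOBSS/ASAP MIPs).
import numpy as np
from scipy.optimize import linprog

def optimal_commitment(R, C):
    """Return (leader value, leader mixed strategy, induced follower action)."""
    n_leader, n_follower = R.shape
    best = (-np.inf, None, None)
    for j in range(n_follower):
        # Maximize sum_i x_i * R[i, j]  ->  minimize -R[:, j] @ x
        c = -R[:, j]
        # Follower must weakly prefer column j over every other column j'
        A_ub = np.array([C[:, jp] - C[:, j] for jp in range(n_follower) if jp != j])
        b_ub = np.zeros(len(A_ub))
        # x must be a probability distribution over the leader's pure strategies
        A_eq = np.ones((1, n_leader))
        b_eq = np.array([1.0])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * n_leader, method="highs")
        if res.success and -res.fun > best[0]:
            best = (-res.fun, res.x, j)
    return best

# Toy 2x2 security-style game with hypothetical payoffs:
# rows = patrol choices (leader), columns = attack choices (follower).
R = np.array([[5.0, -1.0], [-2.0, 4.0]])   # leader (defender) payoffs
C = np.array([[-3.0, 2.0], [1.0, -4.0]])   # follower (attacker) payoffs
value, x, j = optimal_commitment(R, C)
print(f"leader value {value:.2f}, commitment {np.round(x, 3)}, attacker plays {j}")
```

Handling Bayesian uncertainty over adversary types, i.e., a probability distribution over several such payoff matrices, is considerably harder, and that is the case the DOBSS and ASAP formulations in the paper address.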