We consider the problem of providing decision support to a patrolling or security service in an adversarial domain. The idea is to create patrols that achieve a high level of coverage or reward while taking into account the presence of an adversary. We assume that the adversary can learn or observe the patrolling strategy and use this knowledge to its advantage. We follow two different approaches, depending on what is known about the adversary. First, if no information about the adversary is available, we use a Markov Decision Process (MDP) to represent patrols and identify randomized solutions that minimize the information available to the adversary; this leads to two algorithms, CRLP and BRLP, for policy randomization of MDPs. Second, when partial information about the adversary is available, we compute efficient patrols by solving a Bayesian Stackelberg game. Here the leader commits to a patrolling strategy first, and an adversary, drawn from a set of possible adversary types, then selects its best response to the given patrol. We provide two efficient mixed-integer programming (MIP) formulations, DOBSS and ASAP, to solve this NP-hard problem. Our experimental results show the efficiency of these algorithms and illustrate how these techniques yield optimal and secure patrolling policies. These models have been applied in practice: DOBSS is at the heart of the ARMOR system currently deployed at Los Angeles International Airport (LAX), where it randomizes checkpoints on the roadways entering the airport and canine patrol routes within the airport terminals.
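To make the commitment step concrete, the sketch below solves the single-adversary-type case with the multiple-LPs method of Conitzer and Sandholm: one linear program per adversary response j finds the best leader mixture under which j is a best response, and the best j overall is kept. This is not the DOBSS MILP itself, which additionally couples multiple Bayesian adversary types to a single leader strategy via binary response variables; it is a minimal sketch of the underlying leader-commits/follower-best-responds structure. The payoff matrices R and C and the two-target patrolling example are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_commitment(R, C):
    """Leader's optimal mixed strategy to commit to (single adversary type).

    R[i, j] is the leader's payoff and C[i, j] the adversary's payoff when
    the leader plays pure strategy i and the adversary responds with j.
    Solves one LP per adversary response j (Conitzer & Sandholm's
    multiple-LPs method) and returns the best (value, mixture, response).
    """
    m, n = R.shape
    best = (-np.inf, None, None)  # (leader value, x, adversary response)
    for j in range(n):
        # Incentive constraints: E_x[C(., j')] <= E_x[C(., j)] for j' != j,
        # i.e. responding with j must be optimal for the adversary under x.
        rows = [C[:, jp] - C[:, j] for jp in range(n) if jp != j]
        A_ub = np.vstack(rows) if rows else None
        b_ub = np.zeros(len(rows)) if rows else None
        res = linprog(
            c=-R[:, j],                        # maximize leader payoff
            A_ub=A_ub, b_ub=b_ub,
            A_eq=np.ones((1, m)), b_eq=[1.0],  # x is a distribution
            bounds=[(0.0, 1.0)] * m,
        )
        if res.success and -res.fun > best[0]:
            best = (-res.fun, res.x, j)
    return best

# Hypothetical two-target patrolling game: the leader patrols one target,
# the adversary attacks one. Payoff numbers are made up for illustration.
R = np.array([[ 5.0, -1.0],
              [-2.0,  3.0]])
C = np.array([[-3.0,  2.0],
              [ 1.0, -4.0]])
value, x, j = optimal_commitment(R, C)
print(f"leader value {value:.2f}, patrol mixture {x}, adversary attacks {j}")
```

In this toy instance the optimal commitment splits the patrol evenly between the two targets, which is exactly the kind of observable-yet-unpredictable randomization the paper targets; DOBSS generalizes the computation by replacing the loop over adversary responses with binary best-response variables for each adversary type in a single MILP.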