How do we build multiagent algorithms for agent interactions with human adversaries? Stackelberg games are natural models for many important applications involving human interaction, such as oligopolistic markets and security domains. In a Stackelberg game, one player, the leader, commits to a strategy, and the follower makes its decision with knowledge of that commitment. Existing algorithms for Stackelberg games efficiently find optimal solutions (leader strategies), but they critically assume that the follower plays optimally. Unfortunately, in real-world applications, agents face human followers (adversaries) who, because of bounded rationality and limited observation of the leader strategy, may deviate from their expected optimal response. Failing to account for these likely deviations when dealing with human adversaries can cause an unacceptable degradation in the leader's reward, particularly in security applications where these algorithms have seen real-world deployment. To address this crucial problem, this paper introduces three new mixed-integer linear programs (MILPs) for Stackelberg games with human adversaries, incorporating: (i) novel anchoring theories on human perception of probability distributions and (ii) robustness approaches for MILPs to address human imprecision. Because these new approaches target human adversaries, traditional proofs of correctness or optimality are insufficient; empirical validation is necessary instead. To that end, this paper considers two settings based on real deployed security systems and compares six approaches (the three new ones against three previous ones) under four observability conditions, with 98 human subjects playing 1360 games in total.
The final conclusion was that a model incorporating both robustness and anchoring achieves statistically significantly better rewards while maintaining solution speeds equivalent to or faster than existing approaches.
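The leader-commitment structure described above can be illustrated with a minimal sketch (not from the paper): a toy two-target security game in which the defender (leader) commits to coverage probabilities and the attacker (follower) best-responds. The payoff numbers are hypothetical, and the follower is assumed perfectly rational, which is exactly the assumption the paper's MILPs relax. The optimal commitment is found here by brute-force grid search rather than by the paper's MILP formulations.

```python
# Hypothetical payoffs per target:
# (defender if covered, defender if uncovered,
#  attacker if covered, attacker if uncovered)
PAYOFFS = {
    0: (2.0, -3.0, -1.0, 4.0),
    1: (1.0, -2.0, -2.0, 3.0),
}

def best_response(p):
    """Rational attacker: pick the target with the highest expected
    attacker utility, given coverage probabilities p = (p0, p1)."""
    def atk_util(t):
        _, _, a_cov, a_unc = PAYOFFS[t]
        return p[t] * a_cov + (1 - p[t]) * a_unc
    return max(PAYOFFS, key=atk_util)

def leader_util(p):
    """Defender's expected utility when the attacker best-responds."""
    t = best_response(p)
    d_cov, d_unc, _, _ = PAYOFFS[t]
    return p[t] * d_cov + (1 - p[t]) * d_unc

def optimal_commitment(steps=1000):
    """One defender resource split across two targets (p1 = 1 - p0);
    search the grid of commitments for the best one."""
    best_p0 = max((i / steps for i in range(steps + 1)),
                  key=lambda p0: leader_util((p0, 1.0 - p0)))
    return best_p0, leader_util((best_p0, 1.0 - best_p0))

p0, value = optimal_commitment()
```

Against a perfectly rational attacker, the optimum sits where the attacker is indifferent between targets; the paper's contribution is precisely that human attackers do not land on this knife-edge, so robust and anchoring-adjusted commitments fare better.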