In Stackelberg games, a "leader" player first commits to a mixed strategy, and a "follower" player then responds based on the observed leader strategy. Notable strides have been made in scaling up algorithms for such games, but the problem of finding optimal leader strategies spanning multiple rounds of the game, with a Bayesian prior over unknown follower preferences, has been left unaddressed. To remedy this shortcoming, we propose a first-of-its-kind tractable method for computing an optimal plan of leader actions in a repeated game against an unknown follower, assuming that the follower plays a myopic best response in every round. Our approach combines Monte Carlo Tree Search, which handles the leader's exploration/exploitation tradeoff, with a novel technique for identifying and pruning dominated leader strategies. The method provably finds asymptotically optimal solutions and scales to real-world security games spanning a double-digit number of rounds.
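The setting described above can be sketched in code: a UCT-style tree search over sequences of leader mixed strategies, where each rollout samples a follower type from the Bayesian prior and the follower plays a myopic best response to the observed mix in every round. The payoff matrices, the discretized strategy set, and the UCT constant below are illustrative assumptions, not taken from the paper, and the paper's dominated-strategy pruning step is omitted.

```python
import math
import random

# Toy 2x2 payoffs (hypothetical numbers for illustration only).
# LEADER_PAYOFF[i][j]: leader's reward when the leader plays pure action i
# and the follower responds with action j.
LEADER_PAYOFF = [[2.0, 0.0], [0.0, 1.0]]
FOLLOWER_TYPES = [                     # Bayesian prior: types equally likely
    [[1.0, 0.0], [0.0, 2.0]],          # type 0 prefers matching the leader
    [[0.0, 1.0], [2.0, 0.0]],          # type 1 prefers mismatching
]

# The leader commits to one of a few discretized mixed strategies per round.
LEADER_STRATEGIES = [(p, 1.0 - p) for p in (0.0, 0.25, 0.5, 0.75, 1.0)]

def follower_best_response(ftype, mix):
    # Myopic follower: maximize expected payoff against the observed mix.
    vals = [sum(mix[i] * ftype[i][j] for i in range(2)) for j in range(2)]
    return max(range(2), key=lambda j: vals[j])

def leader_value(mix, j):
    # Leader's expected payoff given its mix and the follower's response j.
    return sum(mix[i] * LEADER_PAYOFF[i][j] for i in range(2))

def uct_plan(rounds=3, iters=2000, c=1.4, seed=0):
    """Return the leader mix chosen at the root after UCT planning."""
    rng = random.Random(seed)
    visits, value = {}, {}            # node stats keyed by action history
    for _ in range(iters):
        ftype = FOLLOWER_TYPES[rng.randrange(len(FOLLOWER_TYPES))]
        path, history, total = [], [], 0.0
        for _depth in range(rounds):
            key = tuple(history)
            stats = visits.setdefault(key, [0] * len(LEADER_STRATEGIES))
            vals = value.setdefault(key, [0.0] * len(LEADER_STRATEGIES))
            untried = [a for a in range(len(LEADER_STRATEGIES)) if stats[a] == 0]
            if untried:
                a = rng.choice(untried)
            else:
                n = sum(stats)
                a = max(range(len(LEADER_STRATEGIES)),
                        key=lambda a: vals[a] / stats[a]
                        + c * math.sqrt(math.log(n) / stats[a]))
            path.append((key, a))
            mix = LEADER_STRATEGIES[a]
            total += leader_value(mix, follower_best_response(ftype, mix))
            history.append(a)
        for key, a in path:           # backpropagate the cumulative reward
            visits[key][a] += 1
            value[key][a] += total
    root_n, root_v = visits[()], value[()]
    best = max(range(len(LEADER_STRATEGIES)),
               key=lambda a: root_v[a] / max(root_n[a], 1))
    return LEADER_STRATEGIES[best]
```

Sampling the follower type once per rollout makes each simulation an ordinary deterministic episode, so plain UCT statistics at the root converge toward the leader strategy that is best in expectation over the prior.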