This article presents two contributions to decision-making in complex partially observable stochastic games. First, we apply two state-of-the-art search techniques that use Monte-Carlo sampling to the task of approximating a Nash equilibrium (NE) in such games, namely Monte-Carlo Tree Search (MCTS) and Monte-Carlo Counterfactual Regret Minimization (MCCFR). MCTS has been proven to approximate a NE in perfect-information games. We show that the algorithm quickly finds a reasonably strong strategy (but not a NE) in a complex imperfect-information game, namely Poker. MCCFR, on the other hand, has theoretical NE convergence guarantees in such games, and we apply it to Poker for the first time. Based on our experiments, we conclude that MCTS is a valid approach when one wants to learn reasonably strong strategies quickly, whereas MCCFR is the better choice when the quality of the strategy matters most. Our second contribution follows from the observation that a NE is not a best response against players who are not themselves playing a NE. We present Monte-Carlo Restricted Nash Response (MCRNR), a sample-based algorithm for computing restricted Nash strategies. These are robust best-response strategies that (1) exploit non-NE opponents more than playing a NE does and (2) are not (overly) exploitable by other strategies. MCRNR combines the advantages of two state-of-the-art algorithms, MCCFR and Restricted Nash Response (RNR), and samples only relevant parts of the game tree. We show that MCRNR learns more quickly than standard RNR in smaller games. We also show in Poker that MCRNR quickly learns robust best-response strategies, and that these strategies exploit opponents more than playing a NE does.
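The abstract does not spell out how the regret-minimization methods it names operate, so as a rough illustration the sketch below shows regret matching, the per-decision update rule on which CFR-style algorithms such as MCCFR are built. Everything in it is an assumption for illustration only (a toy Rock-Paper-Scissors game, its payoff matrix, the iteration count, and all function names); it is not the paper's implementation or its Poker setup.

```python
# Minimal sketch of regret matching on a toy zero-sum matrix game
# (Rock-Paper-Scissors). Illustrative only: game, payoffs, and names
# are assumptions, not the algorithms evaluated in the article.
import random

ACTIONS = 3  # Rock, Paper, Scissors
# PAYOFF[x][y]: payoff of the player choosing x against the player choosing y.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]


def strategy_from_regrets(cum_regret):
    """Regret matching: play actions in proportion to positive cumulative regret."""
    positive = [max(r, 0.0) for r in cum_regret]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS  # uniform if no action has positive regret


def train(iterations=50_000):
    cum_regret = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    cum_strategy = [[0.0] * ACTIONS, [0.0] * ACTIONS]

    for _ in range(iterations):
        strategies = [strategy_from_regrets(cum_regret[p]) for p in (0, 1)]
        actions = [random.choices(range(ACTIONS), weights=strategies[p])[0]
                   for p in (0, 1)]

        for p in (0, 1):
            me, opp = actions[p], actions[1 - p]
            realized = PAYOFF[me][opp]
            for a in range(ACTIONS):
                # Regret: how much better action a would have done than what was played.
                cum_regret[p][a] += PAYOFF[a][opp] - realized
                cum_strategy[p][a] += strategies[p][a]

    # The *average* strategy, not the final one, converges to a Nash equilibrium.
    return [[s / sum(cum_strategy[p]) for s in cum_strategy[p]] for p in (0, 1)]


if __name__ == "__main__":
    print("Average strategies (≈ 1/3 per action at the NE):", train())
```

In extensive-form settings, CFR-style methods apply this update at every information set, and the Monte-Carlo variants (MCCFR, and by extension MCRNR) sample trajectories through the game tree so that, as the abstract notes, only the relevant parts of the tree are visited and updated on each iteration.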