Efficient algorithms exist for finding optimal policies in extensive-form games. However, human-scale problems are typically so large that this computation remains infeasible with modern computing resources. State-space abstraction techniques allow for the derivation of a smaller and strategically similar abstract domain, in which an optimal strategy can be computed and then used as a suboptimal strategy in the real domain. In this paper, we consider the task of evaluating the quality of an abstraction, independent of a specific abstract strategy. In particular, we use a recent metric for abstraction quality and examine imperfect recall abstractions, in which agents "forget" previously observed information to focus the abstraction effort on more recent and relevant state information. We present experimental results in the domain of Texas hold'em poker that validate the use of distribution-aware abstractions over expectation-based approaches, demonstrate that the new metric better predicts tournament performance, and show that abstractions built using imperfect recall outperform those built using perfect recall in terms of both exploitability and one-on-one play.
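The contrast between distribution-aware and expectation-based abstraction can be made concrete with a toy sketch (this is an illustration under simplified assumptions, not the paper's implementation). Hands are modeled as small histograms over final hand strength; an expectation-based abstraction compares only their means, while a distribution-aware abstraction compares the full histograms, e.g. with a 1-D earth mover's distance:

```python
# Illustrative sketch, not the paper's code: expectation-based vs
# distribution-aware comparison of two toy hand-strength histograms.

def emd_1d(p, q):
    """Earth mover's distance between two 1-D histograms over the
    same bins (equal to the L1 distance between their CDFs)."""
    total, cdf_diff = 0.0, 0.0
    for pi, qi in zip(p, q):
        cdf_diff += pi - qi
        total += abs(cdf_diff)
    return total

def mean_strength(p):
    """Expected hand strength, with bin centers spread over [0, 1]."""
    n = len(p)
    return sum(pi * (i + 0.5) / n for i, pi in enumerate(p))

# A polarized "drawing" hand (often weak, sometimes very strong) and a
# concentrated "made" hand with the same expected strength. Bucketing
# on E[HS] alone would merge them into one abstract state.
draw = [0.5, 0.0, 0.0, 0.0, 0.5]   # E[HS] = 0.5
made = [0.0, 0.0, 1.0, 0.0, 0.0]   # E[HS] = 0.5

print(abs(mean_strength(draw) - mean_strength(made)))  # 0.0: means agree
print(emd_1d(draw, made))  # positive: the distributions differ
```

Here the two hands are indistinguishable to an expectation-based metric but clearly separated by the distributional one, which is the intuition behind clustering hands on their strength distributions rather than their expected strength.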