We present the card game Magic: The Gathering as an interesting test bed for AI research. We believe that the complexity of the game offers new challenges in areas such as search in imperfect information domains and opponent modelling. Since there are thousands of possible cards, many of which change the rules to some extent, building a successful AI for Magic: The Gathering ultimately requires a rather general form of game intelligence (although we consider only a small subset of these cards in this paper). We create a range of players based on stochastic, rule-based and Monte Carlo approaches, and investigate Monte Carlo search both with and without a sophisticated rule-based approach for generating game rollouts. We also examine the effect of increasing the number of Monte Carlo simulations on playing strength, and investigate whether Monte Carlo simulations can enable an otherwise weak player to overcome a stronger rule-based player. Overall, we show that Monte Carlo search is a promising avenue for producing a strong AI player for Magic: The Gathering.
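To illustrate the general idea behind rollout-based move selection described above, here is a minimal sketch of flat Monte Carlo search. It is not the paper's Magic: The Gathering implementation; as a stand-in domain it uses a simple one-pile Nim game (take 1–3 stones, last stone wins), with uniformly random rollouts standing in for the rollout policy. All function names and parameters are illustrative assumptions.

```python
import random

def legal_moves(stones):
    # In Nim you may take 1, 2 or 3 stones, limited by what remains.
    return [n for n in (1, 2, 3) if n <= stones]

def random_rollout(stones, player_to_act):
    # Play uniformly random moves to the end of the game.
    # The player who takes the last stone wins; return that player (0 or 1).
    player = player_to_act
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return player
        player = 1 - player

def monte_carlo_move(stones, player, simulations=200):
    # Flat Monte Carlo search: score each legal move by the fraction of
    # random rollouts (from the resulting position) that `player` wins,
    # then return the highest-scoring move.
    best_move, best_score = None, -1.0
    for take in legal_moves(stones):
        remaining = stones - take
        if remaining == 0:
            return take  # taking the last stone wins immediately
        wins = sum(1 for _ in range(simulations)
                   if random_rollout(remaining, 1 - player) == player)
        score = wins / simulations
        if score > best_score:
            best_move, best_score = take, score
    return best_move
```

Swapping `random_rollout` for a stronger rule-based policy, as the paper investigates, changes only the quality of the simulated games, not the surrounding search: the move-scoring loop stays the same.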