A hierarchical approach to computer Hex
Artificial Intelligence - Chips challenging champions: games, computers and Artificial Intelligence
Computational Intelligence: Concepts to Implementations
Combining online and offline knowledge in UCT
Proceedings of the 24th International Conference on Machine Learning
Monte-Carlo simulation balancing
Proceedings of the 26th Annual International Conference on Machine Learning (ICML '09)
Achieving master level play in 9×9 computer Go
Proceedings of the 23rd National Conference on Artificial Intelligence (AAAI '08), Volume 3
Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI '09)
Adding expert knowledge and exploration in Monte-Carlo Tree Search
Proceedings of the 12th International Conference on Advances in Computer Games (ACG '09)
Monte-Carlo Tree Search (MCTS) grows a partial game tree and uses a large number of random simulations to approximate the values of its nodes. It has proven effective in games such as Go and Hex, where the large search space and the difficulty of evaluating positions cause problems for standard methods. The best MCTS players use carefully hand-crafted rules to bias the random simulations. Crafting good rules is very difficult, as even rules that promote stronger simulation play can weaken the overall MCTS system [12]. Our Hivemind system uses evolution strategies to automatically learn effective rules for biasing the random simulations. We have built an MCTS player for the game Hex using Hivemind. The rules learned by Hivemind yield a 90% win rate against a baseline MCTS system, and a significant improvement against the computer Hex world champion, MoHex.
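The abstract combines two ingredients: a playout policy that biases otherwise-random simulations using weighted move features, and an evolution strategy that tunes those weights. As a rough illustration only (this is not Hivemind's actual implementation; the softmax form of the bias, the feature representation, and the choice of a (1+1)-ES are all assumptions for the sketch), the idea might look like:

```python
import math
import random

def biased_playout_move(moves, features, weights, rng):
    """Sample a playout move with probability proportional to
    exp(weights . features[move]) -- a softmax over move features.
    With all-zero weights this reduces to a uniform random playout."""
    scores = [math.exp(sum(w * f for w, f in zip(weights, features[m])))
              for m in moves]
    r = rng.random() * sum(scores)
    acc = 0.0
    for move, score in zip(moves, scores):
        acc += score
        if r <= acc:
            return move
    return moves[-1]

def one_plus_one_es(fitness, dim, sigma=0.3, iterations=200, seed=1):
    """(1+1) evolution strategy: perturb the parent weight vector with
    Gaussian noise and keep the child whenever its fitness is no worse.
    In a system like the one described, fitness(weights) would be the
    measured win rate of an MCTS player whose playouts are biased by
    these weights; here it is an arbitrary callable."""
    rng = random.Random(seed)
    parent = [0.0] * dim
    parent_fit = fitness(parent)
    for _ in range(iterations):
        child = [w + rng.gauss(0.0, sigma) for w in parent]
        child_fit = fitness(child)
        if child_fit >= parent_fit:
            parent, parent_fit = child, child_fit
    return parent, parent_fit
```

In practice the expensive part is the fitness evaluation, which requires playing many games against a baseline; any cheap surrogate function exercises the loop the same way.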