Recently, Monte-Carlo Tree Search (MCTS) has advanced the field of computer Go substantially. MCTS is also making inroads in the game of Lines of Action (LOA), which has so far been dominated by αβ search. In this paper we investigate how to use a positional evaluation function in a Monte-Carlo simulation-based LOA program (MC-LOA). Four different simulation strategies are designed, called Evaluation Cut-Off, Corrective, Greedy, and Mixed, each using the evaluation function in a different way. Experimental results reveal that the Mixed strategy performs best. This strategy draws moves randomly according to their transition probabilities in the first part of a simulation, but selects them based on their evaluation score in the second part. Using this simulation strategy, the MC-LOA program plays at the same level as the αβ program MIA, the best LOA-playing entity in the world.
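The Mixed strategy described above can be sketched in code. The following is a minimal, hypothetical illustration (not the paper's implementation): all function names, the switch point, and the toy game used in the demo are assumptions. The playout samples moves proportionally to a transition probability in the early plies and switches to greedy evaluation-based selection afterwards.

```python
import random


def mixed_playout(state, legal_moves, apply_move, transition_prob,
                  evaluate, is_terminal, switch_ply=6, max_ply=40):
    """Hypothetical sketch of a 'Mixed' simulation strategy.

    Phase 1 (ply < switch_ply): draw a move at random, weighted by its
    transition probability. Phase 2: pick the move whose resulting
    position has the best static evaluation score.
    """
    for ply in range(max_ply):
        if is_terminal(state):
            break
        moves = legal_moves(state)
        if not moves:
            break
        if ply < switch_ply:
            weights = [transition_prob(state, m) for m in moves]
            move = random.choices(moves, weights=weights, k=1)[0]
        else:
            move = max(moves, key=lambda m: evaluate(apply_move(state, m)))
        state = apply_move(state, move)
    return evaluate(state)


# Demo on a toy race game: state is an integer, moves add 1 or 2,
# the game ends at 20 or more, and the evaluation is the state itself.
random.seed(0)
result = mixed_playout(
    state=0,
    legal_moves=lambda s: [1, 2],
    apply_move=lambda s, m: s + m,
    transition_prob=lambda s, m: 1.0,   # uniform random in phase 1
    evaluate=lambda s: s,
    is_terminal=lambda s: s >= 20,
)
print(result)
```

In the greedy phase this sketch evaluates every successor position, which mirrors the idea that the second part of a simulation trades randomness for evaluation-guided play; a real LOA program would additionally need move legality, terminal detection, and an evaluation tuned to the game.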