The strength of a program playing an adversarial game such as chess or checkers depends greatly on how selectively it explores the branches of the game tree: some lines are discontinued early, while others are extended and searched more deeply. Finding the best set of parameters to control these search extensions is a difficult, time-consuming, and tedious task. In this paper we describe a method for automatically tuning search-extension parameters in adversarial search. Based on the new method, two learning variants are introduced: one for offline learning and the other for online learning. The two approaches are compared, and experimental results are provided in the domain of chess.
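To make the idea of parameterized search extensions concrete, the following is a minimal illustrative sketch (not the paper's actual algorithm): a negamax search over a toy game tree in which each move belongs to a category, and each category carries a fractional depth extension. The category names ("forcing", "quiet") and the extension weights are hypothetical; weights like these are exactly the parameters an offline or online tuning procedure would adjust.

```python
def evaluate(node):
    # Static evaluation: leaves carry a score from the side to move's
    # point of view; unresolved interior positions get a neutral score.
    return node if isinstance(node, int) else 0


def negamax(node, depth, extensions):
    """Negamax with fractional search extensions.

    `node` is either an int (leaf score) or a dict mapping
    (move_category, move_index) pairs to child nodes.
    `extensions` maps a move category to the fraction of a ply by
    which searching that move extends the remaining depth.
    """
    if isinstance(node, int) or depth <= 0:
        return evaluate(node)
    best = float("-inf")
    for (category, _), child in node.items():
        # A "forcing" move consumes less effective depth, so forcing
        # lines are searched more deeply than quiet ones.
        child_depth = depth - 1 + extensions.get(category, 0.0)
        best = max(best, -negamax(child, child_depth, extensions))
    return best


# Toy tree: the quiet move reaches a leaf scoring 3 for the opponent,
# while the forcing move leads to a position whose value only appears
# one ply deeper (a leaf scoring 5 for us, i.e. 5 for the side two
# plies down, seen through two sign flips).
root = {
    ("quiet", 0): 3,
    ("forcing", 0): {("quiet", 0): 5},
}

# With the extension, the forcing line is resolved and preferred;
# without it, the deeper win is invisible at this nominal depth.
with_ext = negamax(root, 1, {"forcing": 0.75})
without_ext = negamax(root, 1, {})
```

Running the sketch, `with_ext` resolves the forcing continuation while `without_ext` cuts it off at the nominal depth, which is the selectivity gap the tuned extension parameters are meant to close.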