In this paper we outline a general approach to the study of problem solving, in which search steps are considered decisions in the same sense as actions in the world. Unlike other metrics in the literature, the value of a search step is defined as a real utility rather than as a quasi-utility, and can therefore be computed directly from a model of the base-level problem solver. We develop a formula for the expected value of a search step in a game-playing context using the single-step assumption, namely that a computation step can be evaluated as if it were the last to be taken. We prove some meta-level theorems that enable the development of a low-overhead algorithm, MGSS*, which chooses search steps in order of highest estimated utility. Although we show that the single-step assumption is untenable in general, a program implemented for the game of Othello soundly beats an alpha-beta search while expanding significantly fewer nodes, even though both programs use the same evaluation function.
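The selection policy the abstract describes (expand the search step with the highest estimated utility, and stop when no step is expected to be worth its computational cost) can be sketched as follows. This is a minimal illustration, not the authors' MGSS* algorithm: the function names, the `value_of_computation` estimator, and the scalar `cost` are all hypothetical placeholders.

```python
import heapq

def utility_guided_expansion(frontier, expand, value_of_computation, cost):
    """Repeatedly expand the frontier node whose estimated utility of
    expansion is highest, stopping when the best remaining step's estimated
    benefit no longer exceeds the cost of computing it (a single-step-style
    stopping test). Returns the nodes actually expanded, in order."""
    # heapq is a min-heap, so store negated utilities for max-first order.
    heap = [(-value_of_computation(n), n) for n in frontier]
    heapq.heapify(heap)
    expanded = []
    while heap:
        neg_u, node = heapq.heappop(heap)
        if -neg_u <= cost:
            # Best available step is not worth its cost: stop searching.
            break
        expanded.append(node)
        for child in expand(node):
            heapq.heappush(heap, (-value_of_computation(child), child))
    return expanded
```

For example, with a toy utility table `{1: 5.0, 2: 0.5, 3: 3.0}`, no children, and `cost=1.0`, only nodes 1 and 3 are expanded; node 2's estimated benefit (0.5) falls below the cost, so the search stops there rather than exhausting the frontier as a fixed-depth alpha-beta search would.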