Minesweeper is a one-person game that looks deceptively easy to play, yet average human performance is far from optimal. Playing the game requires logical, arithmetic, and probabilistic reasoning based on spatial relationships on the board; simply checking a board state for consistency is already an NP-complete problem. Given the difficulty of hand-crafting strategies to play this and other games, AI researchers have long been interested in learning such strategies automatically from experience. In this paper, we show that by integrating certain techniques into a general-purpose learning system (Mio), the resulting system is capable of inducing a Minesweeper playing strategy whose winning rate beats that of average human players. In addition, we discuss the necessary background knowledge, present experimental results demonstrating the gain obtained with our techniques, and show the strategy learned for the game.
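The consistency problem mentioned above asks whether some assignment of mines to the unrevealed cells satisfies every revealed number. A minimal brute-force sketch (not from the paper; all names are illustrative) makes the claim concrete: the exhaustive search over all 2^n mine assignments reflects why no efficient general algorithm is known.

```python
from itertools import product

def consistent(rows, cols, clues, frontier):
    """Return True if some mine assignment to the unknown 'frontier'
    cells satisfies every revealed clue.

    clues: dict mapping a revealed cell (r, c) to its adjacent-mine count.
    frontier: list of unknown (r, c) cells that may hold mines.
    Brute force: tries all 2**len(frontier) assignments (exponential,
    mirroring the NP-completeness of the general decision problem).
    """
    def neighbors(r, c):
        # All in-bounds cells adjacent to (r, c), including diagonals.
        return [(r + dr, c + dc)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < rows and 0 <= c + dc < cols]

    for mines in product((0, 1), repeat=len(frontier)):
        layout = dict(zip(frontier, mines))
        # Every clue must equal the number of mines placed around it.
        if all(sum(layout.get(n, 0) for n in neighbors(r, c)) == k
               for (r, c), k in clues.items()):
            return True
    return False
```

For example, on a 1x3 board a revealed "1" at (0, 0) with unknown cell (0, 1) is satisfiable, while a revealed "2" there is not, since only one adjacent cell can hold a mine.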