A taxonomy of concepts for evaluating chess strength
Proceedings of the 1990 ACM/IEEE conference on Supercomputing
Chess has served as a convenient vehicle for studying cognition and perception (see de Groot [1965]; Chase and Simon [1973]) as well as machine intelligence. Perhaps the central question for both of these research uses of chess is: how much chess-specific knowledge does it take to play at a given level of competence, for example, at the master level? It is difficult to say what chess-specific knowledge is, and it certainly comprises different types of knowledge that must be considered independently of one another (for example, "book knowledge" obtained by studying chess books is quite different from experience gained in over-the-board play). Even if one succeeds in defining "chess-specific knowledge," there remains the difficulty of measuring it. Because of these difficulties, any approach to measuring the amount of knowledge possessed by a practitioner of a craft must rest on questionable assumptions, and any result obtained is subject to uncertainty and criticism. Only the inherent interest of the question justifies reporting on a rough and inconclusive experiment designed to answer one aspect of it: how much chess-specific knowledge does it take to play at a given level of competence?