An Active Symbols Theory of Chess Intuition
Minds and Machines
Consider the game of chess: when faced with a complex scenario, how does understanding arise in a player's mind? How does one integrate disparate cues into a global, meaningful whole? How do players avoid the combinatorial explosion? How are abstract ideas represented? The purpose of this paper is to propose a new computational model of human chess cognition. We suggest that analogies and abstract roles are crucial to understanding a chess scenario. We present a proof-of-concept model, in the form of a computational architecture, which accounts for many crucial aspects of human play, such as (i) the concentration of attention on relevant aspects of the position, (ii) how humans may avoid the combinatorial explosion, (iii) the perception of similarity at a strategic level, (iv) a state of meaningful anticipation of how a global scenario may evolve, and (v) move choice as an emergent phenomenon arising from the actions of subcognitive processes.