On Machine Symbol Grounding and Optimization
International Journal of Cognitive Informatics and Natural Intelligence
In response to Searle's well-known Chinese room argument against Strong AI (and more generally, computationalism), Harnad proposed that if the symbols manipulated by a robot were sufficiently grounded in the real world, then the robot could be said to literally understand. In this article, I expand on the notion of symbol groundedness in three ways. Firstly, I show how a robot might select the best set of categories describing the world, given that fundamentally continuous sensory data can be categorised in an almost infinite number of ways. Secondly, I discuss the notion of grounded abstract (as opposed to concrete) concepts. Thirdly, I give an objective criterion for deciding when a robot's symbols become sufficiently grounded for "understanding" to be attributed to it. This deeper analysis of what symbol groundedness actually is weakens Searle's position in significant ways; in particular, whilst Searle may be able to refute Strong AI in the specific context of present-day digital computers, he cannot refute computationalism in general.
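
The abstract does not spell out how a robot would select "the best set of categories" from continuous sensory data, and the paper's own procedure is not reproduced here. As a loose illustration of the general idea only (not the author's method), the sketch below discretises continuous readings by fitting Gaussian mixture models of increasing size and keeping the one with the lowest Bayesian Information Criterion, so that the number of categories is chosen by a trade-off between fit and complexity rather than fixed in advance. The synthetic sensor_readings data and the use of scikit-learn's GaussianMixture are assumptions made purely for this example.

    # Illustrative sketch, not the paper's algorithm: choose a category set for
    # continuous "sensory" data via Gaussian mixtures scored by BIC.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Synthetic 2-D sensor stream drawn from three underlying regimes
    # (stand-in data; nothing here comes from the article).
    sensor_readings = np.vstack([
        rng.normal(loc=(0, 0), scale=0.5, size=(200, 2)),
        rng.normal(loc=(4, 0), scale=0.5, size=(200, 2)),
        rng.normal(loc=(2, 3), scale=0.5, size=(200, 2)),
    ])

    best_model, best_bic = None, np.inf
    for k in range(1, 8):  # candidate numbers of categories
        model = GaussianMixture(n_components=k, random_state=0).fit(sensor_readings)
        bic = model.bic(sensor_readings)  # penalises extra categories
        if bic < best_bic:
            best_model, best_bic = model, bic

    # Discrete category labels assigned to the continuous readings.
    categories = best_model.predict(sensor_readings)
    print(f"selected {best_model.n_components} categories (BIC = {best_bic:.1f})")

Any model-selection score (AIC, MDL, cross-validated likelihood) could play the same role; the point of the sketch is simply that a continuous sensory stream admits many partitions, and an explicit criterion is needed to prefer one category set over another.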