Symbol grounding and its implications for artificial intelligence

  • Authors: Michael J. Mayo
  • Affiliation: School of Information Technology, Bond University, Gold Coast, Qld 4229, Australia
  • Venue: ACSC '03 Proceedings of the 26th Australasian computer science conference - Volume 16
  • Year: 2003

Abstract

In response to Searle's well-known Chinese room argument against Strong AI (and, more generally, computationalism), Harnad proposed that if the symbols manipulated by a robot were sufficiently grounded in the real world, then the robot could be said to literally understand. In this article, I expand on the notion of symbol groundedness in three ways. Firstly, I show how a robot might select the best set of categories describing the world, given that fundamentally continuous sensory data can be categorised in an almost infinite number of ways. Secondly, I discuss the notion of grounded abstract (as opposed to concrete) concepts. Thirdly, I give an objective criterion for deciding when a robot's symbols become sufficiently grounded for "understanding" to be attributed to it. This deeper analysis of what symbol groundedness actually is weakens Searle's position in significant ways; in particular, whilst Searle may be able to refute Strong AI in the specific context of present-day digital computers, he cannot refute computationalism in general.
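The abstract's first point, selecting a "best" set of categories from continuous sensory data, can be pictured with a small clustering sketch. This is not the method from the paper: it assumes, purely for illustration, a k-means reading of categorisation over simulated sensor readings, with the silhouette score standing in for an objective criterion that selects among candidate category sets.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Simulated continuous sensory data: 300 two-dimensional readings.
# (Illustrative only; the paper does not specify a data model.)
rng = np.random.default_rng(0)
readings = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.3, size=(100, 2)),
    rng.normal(loc=[2.0, 2.0], scale=0.3, size=(100, 2)),
    rng.normal(loc=[0.0, 3.0], scale=0.3, size=(100, 2)),
])

# Try several candidate category sets (numbers of clusters) and score each.
# The silhouette score here is a stand-in for an objective criterion that
# prefers compact, well-separated categories.
best_k, best_score, best_labels = None, -1.0, None
for k in range(2, 8):
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(readings)
    score = silhouette_score(readings, model.labels_)
    if score > best_score:
        best_k, best_score, best_labels = k, score, model.labels_

print(f"Selected {best_k} categories (silhouette = {best_score:.2f})")
# Each discrete label can then serve as a candidate grounded symbol tied
# to a region of the continuous sensory space.
```

On this reading, "grounding" a symbol amounts to anchoring it to a stable region of sensory space, and the selection criterion adjudicates between the many possible ways of carving that space into categories.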