Symbolism and enactivism: an experimental test of conflicting approaches to artificial intelligence

  • Authors:
  • S. F. Worgan; R. I. Damper

  • Affiliations:
  • Information: Signals, Images, Systems (ISIS) Research Group, School of Electronics and Computer Science, University of Southampton, Southampton, UK (both authors)

  • Venue:
  • Journal of Experimental & Theoretical Artificial Intelligence
  • Year:
  • 2009

Abstract

One does not have to go very far into the subject of artificial intelligence (AI) before deep philosophical issues start to surface. The Turing test, the frame problem, the Chinese room argument, symbol grounding, the role of representation: these are just a few of the topics that have generated intense discussion and debate over the years. One particular controversy surrounds symbolism: whether or not it is necessary to have explicit symbol representation and, if so, the mechanism(s) by which these symbols come into existence. How are such questions to be answered? Generally, they are considered to lie in the realm of philosophy and not to be amenable to experimental tests. However, when even limited experimental testing is feasible, it can offer valuable insight. In this article, we present an empirical test designed to explore some important foundational issues in AI. An artificial agent inhabits a digital world (a cellular automaton) in which its cognitive abilities vary in three dimensions (size of symbol memory, percentage of symbols that are innate, planning depth), allowing us to position it in a space that reflects degree of commitment to key philosophical standpoints. One plane of this space corresponds to pure symbol attachment, another plane corresponds to pure symbol grounding, and the origin of coordinates corresponds to pure enactivism. We find that, if properly designed, an enactivist (purely reactive) agent architecture can perform as well as one employing planning in this scenario. Planning has strengths when task/environment complexity makes design difficult, but weaknesses if an inappropriate world model is acquired (e.g. as a result of mismatch between model and task/environment complexity). However, the main claim of the article is that empirical exploration of the kind presented here could usefully form the initial phase of the design of many practical AI systems, offering a valuable alternative to simply declaring a priori adherence to a particular philosophical position.
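
The abstract's three design dimensions (symbol memory size, fraction of innate symbols, planning depth) can be read as a simple parameter space. The following minimal Python sketch, with hypothetical names and values not taken from the paper, illustrates one way an agent configuration might be mapped onto the positions the abstract names: the two "pure" symbolic planes and pure enactivism at the origin of coordinates.

```python
# Illustrative sketch only: hypothetical parameterisation of the three-dimensional
# design space described in the abstract, not the authors' actual implementation.

from dataclasses import dataclass


@dataclass
class AgentConfig:
    """Position of an agent architecture in the three-dimensional space."""
    symbol_memory_size: int        # number of symbols the agent can store
    innate_symbol_fraction: float  # fraction of symbols fixed at design time (0.0-1.0)
    planning_depth: int            # look-ahead depth; 0 means purely reactive


def philosophical_standpoint(cfg: AgentConfig) -> str:
    """Map a configuration onto the standpoints named in the abstract."""
    if cfg.symbol_memory_size == 0 and cfg.planning_depth == 0:
        return "pure enactivism"          # origin: no symbols, no planning
    if cfg.innate_symbol_fraction == 1.0:
        return "pure symbol attachment"   # all symbols designed in by hand
    if cfg.innate_symbol_fraction == 0.0:
        return "pure symbol grounding"    # all symbols acquired through interaction
    return "hybrid position"


if __name__ == "__main__":
    print(philosophical_standpoint(AgentConfig(0, 0.0, 0)))   # pure enactivism
    print(philosophical_standpoint(AgentConfig(32, 1.0, 3)))  # pure symbol attachment
    print(philosophical_standpoint(AgentConfig(32, 0.0, 3)))  # pure symbol grounding
```

In this reading, the origin (no symbol memory, no planning) corresponds to the purely reactive, enactivist agent, while the two extreme settings of the innate-symbol fraction pick out the symbol-attachment and symbol-grounding planes; intermediate settings occupy hybrid positions between the standpoints.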