One does not have to go very far into the subject of artificial intelligence (AI) before deep philosophical issues start to surface. The Turing test, the frame problem, the Chinese room argument, symbol grounding, the role of representation: these are just a few of the topics that have generated intense discussion and debate over the years. One particular controversy surrounds symbolism: whether or not it is necessary to have explicit symbol representation and, if so, the mechanism(s) by which these symbols come into existence. How are such questions to be answered? Generally, they are considered to lie in the realm of philosophy and not to be amenable to experimental tests. However, when even limited experimental testing is feasible, it can offer valuable insight. In this article, we present an empirical test designed to explore some important foundational issues in AI. An artificial agent inhabits a digital world (a cellular automaton) in which its cognitive abilities vary along three dimensions (size of symbol memory, percentage of symbols that are innate, and planning depth), allowing us to position it in a space that reflects its degree of commitment to key philosophical standpoints. One plane of this space corresponds to pure symbol attachment, another plane corresponds to pure symbol grounding, and the origin of coordinates corresponds to pure enactivism. We find that an enactivist (purely reactive) agent architecture, if properly designed, can perform as well in this scenario as one employing planning. Planning has strengths when task/environment complexity makes design difficult, but weaknesses if an inappropriate world model is acquired (e.g., as a result of a mismatch between model and task/environment complexity).
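The three-dimensional design space described above can be made concrete in code. The following is a minimal sketch, not taken from the article itself: the class name, field names, and threshold logic are all illustrative assumptions about how one might parameterize an agent and map extreme configurations onto the philosophical positions named in the abstract.

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    """A point in a hypothetical three-dimensional cognitive design space."""
    memory_size: int        # number of symbols the agent can store (0 = no symbols)
    innate_fraction: float  # fraction of symbols fixed at design time, in [0, 1]
    planning_depth: int     # lookahead depth when planning (0 = purely reactive)

    def stance(self) -> str:
        """Map a configuration to the philosophical position it embodies."""
        # Origin of coordinates: no symbol memory and no planning.
        if self.memory_size == 0 and self.planning_depth == 0:
            return "pure enactivism"
        # All symbols supplied by the designer.
        if self.innate_fraction == 1.0:
            return "pure symbol attachment"
        # All symbols acquired from the agent's own experience.
        if self.innate_fraction == 0.0:
            return "pure symbol grounding"
        return "hybrid"

print(AgentConfig(0, 0.0, 0).stance())
print(AgentConfig(8, 1.0, 3).stance())
```

Intermediate configurations fall between the pure positions, which is what allows the kind of empirical comparison the article describes.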
However, the main claim of the article is that empirical exploration of the kind presented here could usefully form the initial phase of the design of many practical AI systems, and that it offers a valuable alternative to simply declaring a priori adherence to a particular philosophical position.