Learning, social intelligence and the Turing Test: why an "out-of-the-box" Turing machine will not pass the Turing Test

  • Authors:
  • Bruce Edmonds; Carlos Gershenson

  • Affiliations:
  • Centre for Policy Modelling, Manchester Metropolitan University, Manchester, United Kingdom; Departamento de Ciencias de la Computación, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, México, D.F., M ...

  • Venue:
  • CiE'12: Proceedings of the 8th Turing Centenary Conference on Computability in Europe: How the World Computes
  • Year:
  • 2012

Abstract

The Turing Test checks for human intelligence rather than any putative general intelligence. It involves repeated interaction requiring learning, in the form of adaptation to the human conversation partner. It is a macro-level, post-hoc test, in contrast to the definition of a Turing machine, which is an a priori, micro-level definition. This raises the question of whether learning is just another computational process, i.e. one that can be implemented as a Turing machine. Here we argue that learning or adaptation is fundamentally different from computation, though it does involve processes that can be seen as computations. To illustrate this difference we compare (a) designing a Turing machine and (b) learning a Turing machine, defining both for the purpose of the argument. We show that there is a well-defined sequence of problems, in the form of the bounded halting problem, which are not effectively designable but are learnable. Some characteristics of human intelligence are reviewed, including its interactive nature, learning abilities, imitative tendencies, linguistic ability and context-dependency. An account that explains some of these characteristics is the Social Intelligence Hypothesis. If this hypothesis is broadly correct, it points to the necessity of a considerable period of acculturation (social learning in context) if an artificial intelligence is to pass the Turing Test. Whilst it is always possible to "compile" the results of learning into a Turing machine, this would not be a designed Turing machine and it would not be able to continually adapt (and so pass future Turing Tests). We conclude three things: that a purely "designed" Turing machine will never pass the Turing Test; that there is no such thing as a general intelligence, since intelligence necessarily involves learning; and that learning/adaptation and computation should be clearly distinguished.
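
The argument turns on the bounded halting problem: whether a given Turing machine halts on a given input within n steps. Unlike the unbounded halting problem, this is decidable simply by simulating the machine for at most n steps. The following is a minimal sketch of such a bounded simulation; it is not code from the paper, and the single-tape encoding, function name and toy machine are assumptions made purely for illustration.

```python
# Illustrative sketch only (not from the paper): the bounded halting problem
# "does this Turing machine halt on this input within max_steps steps?"
# is decidable by direct simulation, unlike the unbounded halting problem.

def bounded_halts(delta, start_state, halt_states, tape, max_steps, blank="_"):
    """Simulate a single-tape Turing machine for at most `max_steps` steps.

    delta: dict mapping (state, symbol) -> (new_state, write_symbol, move),
           where move is -1 (left) or +1 (right).
    Returns True if a halting state is reached within the bound, else False.
    """
    cells = dict(enumerate(tape))      # sparse tape representation
    state, head = start_state, 0
    for _ in range(max_steps):
        if state in halt_states:
            return True                # halted within the step bound
        symbol = cells.get(head, blank)
        if (state, symbol) not in delta:
            return False               # no applicable rule: the machine jams
        state, write, move = delta[(state, symbol)]
        cells[head] = write
        head += move
    return state in halt_states        # halted exactly at the bound?


# Toy machine: scan right over 1s and halt at the first blank cell.
delta = {
    ("scan", "1"): ("scan", "1", +1),
    ("scan", "_"): ("halt", "_", +1),
}
print(bounded_halts(delta, "scan", {"halt"}, "111", max_steps=10))  # True
print(bounded_halts(delta, "scan", {"halt"}, "111", max_steps=2))   # False
```

In the paper's terms, each such bounded decider can be constructed directly, but the sequence of bounded halting problems as a whole is argued to be learnable rather than effectively designable.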