Philosophical foundations of artificial consciousness

  • Author: Ron Chrisley
  • Affiliation: Centre for Research in Cognitive Science and Department of Informatics, University of Sussex, Brighton BN1 9QH, United Kingdom
  • Venue: Artificial Intelligence in Medicine
  • Year: 2008

Abstract

Objective: Consciousness is often thought to be the aspect of mind least amenable to being understood or replicated by artificial intelligence (AI). The first-personal, subjective, what-it-is-like-to-be-something nature of consciousness is thought to be untouchable by the computations, algorithms, processing and functions of AI methods. Since AI is the most promising avenue toward artificial consciousness (AC), many conclude that AC is even more doomed than AI supposedly is. The objective of this paper is to evaluate the soundness of this inference.

Methods: The results are achieved by means of conceptual analysis and argumentation.

Results and conclusions: It is shown that pessimism concerning the theoretical possibility of artificial consciousness is unfounded, resting as it does on misunderstandings of AI and a lack of awareness of the possible roles AI might play in accounting for or reproducing consciousness. This is done by making some foundational distinctions relevant to AC, and using them to show that some common reasons given for AC scepticism do not touch some of the (usually neglected) possibilities for AC, such as prosthetic, discriminative, practically necessary, and lagom (necessary-but-not-sufficient) AC. Along the way, three strands of the author's work in AC (interactive empiricism, synthetic phenomenology, and ontologically conservative heterophenomenology) are used to illustrate and motivate these distinctions and the defences of AC they make possible.