Self-improving AI: an Analysis

  • Authors: John Storrs Hall
  • Affiliations: Storrmont, Laporte, USA
  • Venue: Minds and Machines
  • Year: 2007

Abstract

Self-improvement was one of the aspects of AI proposed for study in the 1956 Dartmouth conference. Turing proposed a "child machine" which could be taught in the human manner to attain adult human-level intelligence. In latter days, the contention that an AI system could be built to learn and improve itself indefinitely has acquired the label of the bootstrap fallacy. Attempts in AI to implement such a system have met with consistent failure for half a century. Technological optimists, however, have maintained that such a system is possible, producing, if implemented, a feedback loop that would lead to a rapid exponential increase in intelligence. We examine the arguments for both positions and draw some conclusions.