Complex value systems in friendly AI

  • Authors: Eliezer Yudkowsky
  • Affiliations: Singularity Institute for Artificial Intelligence, San Francisco, CA
  • Venue: AGI'11: Proceedings of the 4th International Conference on Artificial General Intelligence
  • Year: 2011

Abstract

A common reaction to first encountering the problem statement of Friendly AI ("Ensure that the creation of a generally intelligent, self-improving, eventually superintelligent system realizes a positive outcome") is to propose a simple design that allegedly suffices, or to reject the problem by replying that "constraining" our creations is undesirable or unnecessary. This paper briefly presents some of the reasoning which suggests that Friendly AI is solvable, but not simply or trivially so, and that a wise strategy would be to invoke detailed learning of and inheritance from human values as a basis for further normalization and reflection.