Asymptotic non-learnability of universal agents with computable horizon functions

  • Authors:
  • Laurent Orseau

  • Affiliations:
  • -

  • Venue:
  • Theoretical Computer Science
  • Year:
  • 2013


Abstract

Finding a universal artificially intelligent agent is an old dream of AI researchers. Solomonoff Induction was a major step toward this goal: it gives a universal solution to the general sequence prediction problem by defining a universal prior distribution. Hutter's AIXI model extends Solomonoff Induction to the reinforcement learning framework, in which almost all, if not all, AI problems can be formulated. However, new difficulties arise because the agent is now active, whereas it is only passive in the sequence prediction setting, and this makes proving AIXI's optimality difficult. In fact, we prove that the current definition of AIXI can sometimes be suboptimal in a certain sense, but that this behavior is nonetheless the most rational one, which emphasizes the difficulty of universal reinforcement learning.
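
For reference, the two constructions named in the abstract are usually written as follows. This is an editorial sketch in Hutter's standard notation, not taken from the paper itself: U denotes a universal monotone Turing machine, \ell(p) the length of a program p, and m = m(k) the horizon function that the paper's title refers to.

    % Solomonoff's universal prior: the weight of a sequence x is the
    % total weight of all programs p whose output starts with x.
    M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

    % AIXI's action choice at step k: an expectimax over future
    % observation-reward pairs o_i r_i up to horizon m = m(k), with
    % environments q weighted by the same universal prior.
    a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
        (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_{1:m}) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The paper's negative result concerns how the choice of a computable horizon function m(k) in the second formula interacts with the agent's ability to learn its environment in the limit.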