Intrinsically Motivated Reinforcement Learning: An Evolutionary Perspective

  • Authors:
  • S. Singh, R. L. Lewis, A. G. Barto, J. Sorg

  • Affiliations:
  • Div. of Comput. Sci. & Eng., Univ. of Michigan, Ann Arbor, MI, USA

  • Venue:
  • IEEE Transactions on Autonomous Mental Development
  • Year:
  • 2010

Abstract

There is great interest in building intrinsic motivation into artificial systems using the reinforcement learning framework. Yet, what intrinsic motivation may mean computationally, and how it may differ from extrinsic motivation, remains a murky and controversial subject. In this paper, we adopt an evolutionary perspective and define a new optimal reward framework that captures the pressure to design good primary reward functions that lead to evolutionary success across environments. The results of two computational experiments show that optimal primary reward signals may yield both emergent intrinsic and extrinsic motivation. The evolutionary perspective and the associated optimal reward framework thus lead to the conclusion that there are no hard and fast features distinguishing intrinsic and extrinsic reward computationally. Rather, the directness of the relationship between rewarding behavior and evolutionary success varies along a continuum.
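The optimal reward framework described in the abstract can be illustrated with a small sketch. The idea, in rough terms: search over a space of candidate primary reward functions; for each candidate, train a reinforcement learning agent that only ever sees that reward, and score the candidate by the agent's evolutionary fitness, measured externally across sampled environments. The code below is a minimal, hypothetical illustration of this search loop, not the paper's actual experiments; the environment, the candidate reward functions, and all names (`train_and_evaluate`, `fitness_event`, etc.) are invented for the example.

```python
import random

ACTIONS = [0, 1]  # toy two-armed bandit: arm 1 can trigger a fitness event

def make_env(seed):
    """Sample an environment: arm 1 pays off with an env-specific probability."""
    rng = random.Random(seed)
    return rng.uniform(0.6, 0.9)  # P(fitness event | action 1)

def fitness_event(p_arm1, action, rng):
    """Fitness accrues only on successful pulls of arm 1."""
    return 1.0 if action == 1 and rng.random() < p_arm1 else 0.0

def train_and_evaluate(reward_fn, n_envs=20, steps=200, alpha=0.1, eps=0.1):
    """Train a simple Q-learner driven by reward_fn; return mean fitness per env.

    Note the separation central to the framework: the agent updates on
    reward_fn's output, while fitness is tallied outside the agent's view.
    """
    total_fitness = 0.0
    for seed in range(n_envs):
        p_arm1 = make_env(seed)
        rng = random.Random(1000 + seed)
        q = [0.0, 0.0]
        for _ in range(steps):
            # epsilon-greedy action selection
            a = rng.choice(ACTIONS) if rng.random() < eps else max(ACTIONS, key=lambda x: q[x])
            f = fitness_event(p_arm1, a, rng)
            r = reward_fn(a, f)          # the agent only ever observes its reward
            q[a] += alpha * (r - q[a])   # one-step bandit value update
            total_fitness += f           # fitness is recorded externally
    return total_fitness / n_envs

# A small (hypothetical) space of candidate primary reward functions.
candidates = {
    "fitness_aligned": lambda a, f: f,             # reward mirrors the fitness event
    "action_bonus":    lambda a, f: f + 0.05 * a,  # mild shaping toward arm 1
    "misaligned":      lambda a, f: 1.0 - a,       # rewards the useless arm
}

# The optimal reward is the argmax over candidates of expected fitness.
scores = {name: train_and_evaluate(fn) for name, fn in candidates.items()}
best = max(scores, key=scores.get)
print(best)
```

Even this toy version shows the framework's key point: the reward function that maximizes fitness need not literally be the fitness signal, which is why optimal rewards can blend what we would intuitively call intrinsic and extrinsic components.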