Evolution of reward functions for reinforcement learning. In Proceedings of the 13th Annual Conference Companion on Genetic and Evolutionary Computation.
Emotion-based intrinsic motivation for reinforcement learning agents. In Proceedings of the 4th International Conference on Affective Computing and Intelligent Interaction (ACII '11), Volume Part I.
Intrinsically motivated exploration via intrinsic value calculation. In Proceedings of the 50th Annual Southeast Regional Conference.
Strong mitigation: nesting search for good policies within search for good reward. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems, Volume 1.
Active learning of inverse models with intrinsically motivated goal exploration in robots. Robotics and Autonomous Systems.
There is great interest in building intrinsic motivation into artificial systems using the reinforcement learning framework. Yet, what intrinsic motivation may mean computationally, and how it may differ from extrinsic motivation, remains a murky and controversial subject. In this paper, we adopt an evolutionary perspective and define a new optimal reward framework that captures the pressure to design good primary reward functions that lead to evolutionary success across environments. The results of two computational experiments show that optimal primary reward signals may yield both emergent intrinsic and extrinsic motivation. The evolutionary perspective and the associated optimal reward framework thus lead to the conclusion that there are no hard and fast features distinguishing intrinsic and extrinsic reward computationally. Rather, the directness of the relationship between rewarding behavior and evolutionary success varies along a continuum.
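The optimal reward framework described above can be read as a nested search: an outer loop searches the space of primary reward functions, an inner loop lets an agent learn with each candidate reward, and candidates are scored by the agent's resulting evolutionary success (fitness), which need not coincide with the reward the agent itself maximizes. A minimal sketch of that structure follows; the two-armed bandit environment, the payoff values, the novelty bonus, and the candidate weightings are all illustrative assumptions, not the paper's actual experimental setup.

```python
import random

def run_agent(reward_weights, episodes=200, seed=0):
    """Inner loop: epsilon-greedy Q-learning on a hypothetical 2-armed bandit.

    The agent maximizes an *internal* reward: a weighted mix of external
    payoff and a novelty bonus (an intrinsic-motivation-like term).
    Fitness, returned to the outer loop, counts external payoff only.
    """
    rng = random.Random(seed)
    payoff = {0: 0.2, 1: 1.0}   # assumed external payoffs per arm
    q = [0.0, 0.0]              # action-value estimates of internal reward
    counts = [0, 0]
    fitness = 0.0
    w_ext, w_nov = reward_weights
    for _ in range(episodes):
        # epsilon-greedy action selection on the internal values
        a = rng.randrange(2) if rng.random() < 0.1 else max((0, 1), key=lambda i: q[i])
        counts[a] += 1
        novelty = 1.0 / counts[a]                # bonus decays with familiarity
        r = w_ext * payoff[a] + w_nov * novelty  # internal (primary) reward
        q[a] += 0.1 * (r - q[a])                 # Q-learning update
        fitness += payoff[a]                     # fitness: external payoff only
    return fitness

def optimal_reward_search(candidates):
    """Outer loop: pick the reward function whose induced agent
    behavior yields the highest evolutionary fitness."""
    return max(candidates, key=run_agent)
```

A reward function selected this way may include terms (like the novelty bonus) with no direct fitness payoff, which is the sense in which intrinsic-looking motivation can emerge from the search.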