Strategies for Affect-Controlled Action-Selection in Soar-RL

  • Authors:
  • Eric Hogewoning, Joost Broekens, Jeroen Eggermont, Ernst G. Bovenkamp

  • Affiliations:
  • Leiden Institute of Advanced Computer Science, Leiden University, P.O. Box 9500, 2300 RA Leiden, The Netherlands; Leiden University Medical Center, Department of Radiology, Division of Image Processing, P.O. Box 9600, 2300 RC Leiden, The Netherlands

  • Venue:
  • IWINAC '07 Proceedings of the 2nd international work-conference on Nature Inspired Problem-Solving Methods in Knowledge Engineering: Interplay Between Natural and Artificial Computation, Part II
  • Year:
  • 2007

Abstract

Reinforcement learning (RL) agents can benefit from adaptive exploration/exploitation behavior, especially in dynamic environments. We focus on regulating this exploration/exploitation behavior by controlling the action-selection mechanism of RL. Inspired by psychological studies showing that affect influences human decision making, we use artificial affect to influence an agent's action-selection. We implement two existing affective strategies and, in addition, a new hybrid method that combines both. These strategies are tested on "maze tasks" in which an RL agent has to find food (a rewarded location) in a maze. We use Soar-RL, the new RL-enabled version of Soar, as a model environment. One task tests the ability to adapt quickly to an environmental change, while the other tests the ability to escape a local optimum in order to find the global optimum. We show that artificial affect-controlled action-selection in some cases helps agents adapt more quickly to changes in the environment.
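
To make the idea concrete, below is a minimal sketch (in Python, not taken from the paper) of one common way to let a single affect signal regulate exploration/exploitation: modulating the temperature of Boltzmann (softmax) action selection. The function names, the [-1, 1] affect range, and the linear affect-to-temperature mapping are illustrative assumptions, not the authors' exact strategies.

```python
import numpy as np

def boltzmann_select(q_values, temperature, rng=None):
    """Sample an action index from a Boltzmann (softmax) distribution over Q-values."""
    rng = rng or np.random.default_rng()
    # Numerically stable softmax: divide by temperature, subtract the max.
    prefs = np.asarray(q_values, dtype=float) / max(temperature, 1e-8)
    prefs -= prefs.max()
    probs = np.exp(prefs)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

def affect_to_temperature(affect, t_min=0.05, t_max=2.0):
    """Map an affect signal in [-1, 1] to a Boltzmann temperature (assumed mapping).

    Negative affect (things going worse than expected) raises the temperature,
    encouraging exploration; positive affect lowers it, encouraging exploitation.
    """
    # Linear interpolation: affect = 1 -> t_min, affect = -1 -> t_max.
    return t_min + (t_max - t_min) * (1.0 - affect) / 2.0

# Hypothetical usage inside an agent's step:
q_values = [0.2, 0.9, 0.1]          # Q-values for the available actions
affect = -0.4                        # e.g., rewards recently below expectation
action = boltzmann_select(q_values, affect_to_temperature(affect))
```

In the affect-as-meta-parameter line of work this paper builds on, the affect signal itself is often derived from reward history, for example by comparing a short-term running average of reward against a long-term one, so that positive affect means "doing better than usual"; the exact signal and mapping used by each strategy are defined in the paper.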