Soar-RL: integrating reinforcement learning with Soar

  • Authors:
  • Shelley Nason; John E. Laird

  • Affiliations:
  • University of Michigan, 1101 Beal Avenue, Ann Arbor, MI 48109-2110, USA (both authors)

  • Venue:
  • Cognitive Systems Research
  • Year:
  • 2005

Abstract

In this paper, we describe an architectural modification to Soar that gives a Soar agent the opportunity to learn statistical information about the past success of its actions and utilize this information when selecting an operator. This mechanism serves the same purpose as production utilities in ACT-R, but the implementation is more directly tied to the standard definition of the reinforcement learning (RL) problem. The paper explains our implementation, gives a rationale for adding an RL capability to Soar, and shows results for Soar-RL agents' performance on two tasks.
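The mechanism the abstract describes — learning statistical information about the past success of actions and using it to select an operator — corresponds to value-based selection in the standard RL formulation. The sketch below illustrates that generic idea with tabular Q-learning and epsilon-greedy selection; the class name, parameters, and structure are assumptions for illustration, not the actual Soar-RL implementation (which encodes values in Soar's rule and preference machinery).

```python
import random
from collections import defaultdict

class OperatorSelector:
    """Illustrative sketch: tabular Q-learning for operator selection."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, operator) -> learned value
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor
        self.epsilon = epsilon        # exploration rate

    def select(self, state, operators):
        # Epsilon-greedy: usually pick the operator with the highest
        # learned value; occasionally explore a random alternative.
        if random.random() < self.epsilon:
            return random.choice(operators)
        return max(operators, key=lambda op: self.q[(state, op)])

    def update(self, state, op, reward, next_state, next_operators):
        # Standard Q-learning backup: move the value of (state, op)
        # toward reward plus the discounted best successor value.
        best_next = max((self.q[(next_state, o)] for o in next_operators),
                        default=0.0)
        target = reward + self.gamma * best_next
        self.q[(state, op)] += self.alpha * (target - self.q[(state, op)])
```

With `epsilon=0.0`, `select` is purely greedy, so an operator whose estimated value has risen through repeated reward is chosen deterministically over unrewarded alternatives.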