An Empirical Analysis of the Impact of Prioritised Sweeping on the DynaQ's Performance

  • Authors:
  • Marek Grześ; Daniel Kudenko

  • Affiliations:
  • Department of Computer Science, University of York, York YO10 5DD, UK; Department of Computer Science, University of York, York YO10 5DD, UK

  • Venue:
  • ICAISC '08: Proceedings of the 9th International Conference on Artificial Intelligence and Soft Computing
  • Year:
  • 2008

Abstract

Reinforcement learning tackles the problem of how to act optimally given observations of the current world state. Agents that learn from reinforcements execute actions in an environment and receive feedback (reward) that can be used to guide the learning process. The distinguishing feature of reinforcement learning is that the model of the environment (i.e., the effects of actions and the reward function) is not known in advance. Model-based approaches form a class of reinforcement learning algorithms that learn a model of the environment's dynamics. This model can be used by the learning agent to simulate interactions with the environment. DynaQ and its extension with prioritised sweeping are the most popular examples of model-based approaches. This paper shows that, contrary to common belief, DynaQ with prioritised sweeping may perform worse than pure DynaQ in domains where the agent can be easily misled by a sub-optimal solution.
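
Below is a minimal, illustrative sketch of tabular DynaQ, not the authors' implementation: direct Q-learning updates from real experience plus simulated updates drawn from a learned transition model. The environment interface (`reset()`, `step(action)`, and an `actions` list) is an assumption made for the example.

```python
import random
from collections import defaultdict

def dyna_q(env, episodes=100, planning_steps=10,
           alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Dyna-Q sketch (hypothetical env interface assumed):
    env.reset() -> state, env.step(a) -> (next_state, reward, done),
    env.actions -> list of actions."""
    Q = defaultdict(float)   # Q[(state, action)]
    model = {}               # model[(state, action)] = (reward, next_state)

    def choose_action(state):
        # epsilon-greedy action selection
        if random.random() < epsilon:
            return random.choice(env.actions)
        return max(env.actions, key=lambda a: Q[(state, a)])

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            action = choose_action(state)
            next_state, reward, done = env.step(action)

            # Direct reinforcement learning update (Q-learning).
            best_next = max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

            # Learn a deterministic model from real experience.
            model[(state, action)] = (reward, next_state)

            # Planning: replay uniformly sampled remembered transitions.
            for _ in range(planning_steps):
                (s, a), (r, s2) = random.choice(list(model.items()))
                best = max(Q[(s2, b)] for b in env.actions)
                Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])

            state = next_state
    return Q
```

In the prioritised sweeping variant studied in the paper, the planning loop does not sample transitions uniformly; instead, state-action pairs are kept in a priority queue ordered by the magnitude of their expected value change, so the largest updates are propagated first. The abstract's claim is that this focusing can backfire in domains where a sub-optimal solution dominates early experience.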