Reinforcement Learning with Approximation Spaces

  • Authors:
  • James F. Peters; Christopher Henry

  • Affiliations:
  • Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba R3T 5V6, Canada. E-mail: {jfpeters,chenry}@ee.umanitoba.ca

  • Venue:
  • Fundamenta Informaticae
  • Year:
  • 2006

Abstract

This paper introduces a rough set approach to reinforcement learning by swarms of cooperating agents. The problem considered is how to guide reinforcement learning using knowledge of acceptable behavior patterns, which becomes possible when the behavior patterns of swarms are considered in the context of approximation spaces. Rough set theory, introduced by Zdzisław Pawlak in the early 1980s, provides a basis for deriving pattern-based rewards within approximation spaces. Both conventional and approximation-space-based forms of reinforcement comparison and of the actor-critic method, as well as two forms of the off-policy Monte Carlo learning control method, are investigated in this article. The study of swarm behavior by collections of biologically inspired bots is carried out in an artificial ecosystem testbed. This ecosystem has an ethological basis, making it possible to observe and explain the behavior of biological organisms in a way that carries over to the study of reinforcement learning by interacting robotic devices. The results of ecosystem experiments with six forms of reinforcement learning are given. The contribution of this article is the presentation of several viable alternatives to conventional reinforcement learning methods, defined in the context of approximation spaces.
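The pattern-based rewards mentioned in the abstract rest on two standard rough-set constructions: the lower approximation of a set of acceptable behaviors, and the rough coverage of a sample of observed behaviors by that lower approximation. The sketch below illustrates these constructions under simplifying assumptions; the attribute names, the toy behavior table, and the use of rough coverage directly as a reward signal are illustrative choices, not details taken from the paper.

```python
from collections import defaultdict

def equivalence_classes(objects, attrs):
    """Partition objects by their values on the probe attributes `attrs`.

    objects: dict mapping behavior id -> dict of attribute values.
    Returns the blocks (sets of ids) of the indiscernibility
    relation induced by attrs: two behaviors fall in the same
    block iff they agree on every attribute in attrs.
    """
    blocks = defaultdict(set)
    for oid, values in objects.items():
        key = tuple(values[a] for a in attrs)
        blocks[key].add(oid)
    return list(blocks.values())

def lower_approximation(blocks, target):
    """Lower approximation of `target`: union of blocks wholly inside it."""
    lower = set()
    for block in blocks:
        if block <= target:
            lower |= block
    return lower

def rough_coverage(sample, lower):
    """Degree to which `sample` covers the lower approximation.

    Defined here as |sample & lower| / |lower|, with the usual
    convention that coverage is 0 when the lower approximation
    is empty. A swarm's reward for a round of behaviors could
    be taken to be this coverage value.
    """
    if not lower:
        return 0.0
    return len(sample & lower) / len(lower)

# Toy behavior table: one binary feature 'a' per behavior (illustrative).
behaviors = {1: {'a': 0}, 2: {'a': 0}, 3: {'a': 1}, 4: {'a': 1}}
acceptable = {1, 2, 4}          # behaviors judged acceptable
blocks = equivalence_classes(behaviors, ['a'])
lower = lower_approximation(blocks, acceptable)   # {1, 2}
reward = rough_coverage({2, 4}, lower)            # 1/2
print(lower, reward)
```

Block {3, 4} is not contained in the acceptable set, so only block {1, 2} survives into the lower approximation; a sample of recent behaviors then earns reward in proportion to how much of that certainly-acceptable region it occupies.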