Learning classifier system equivalent with reinforcement learning with function approximation

  • Authors:
  • Atsushi Wada; Keiki Takadama; Katsunori Shimohara

  • Affiliations:
  • ATR NIS, Hikaridai, Seika-cho, Soraku-gun, Kyoto, Japan; Tokyo Institute of Technology, Nagatsuta-cho, Midori-ku, Kanagawa, Japan; ATR NIS, Hikaridai, Seika-cho, Soraku-gun, Kyoto, Japan

  • Venue:
  • GECCO '05 Proceedings of the 7th annual workshop on Genetic and evolutionary computation
  • Year:
  • 2005

Abstract

We present an experimental comparison of the reinforcement process in Learning Classifier Systems (LCS) and Reinforcement Learning (RL) with function approximation (FA), focusing on their generalization mechanisms. To validate our previous theoretical analysis, which derived the equivalence of the reinforcement process between LCS and RL, we introduce a simple test environment named Gridworld that can be applied to both LCS and RL under three classes of generalization: (1) tabular representation; (2) state aggregation; and (3) linear approximation. In simulation experiments comparing an LCS with its GA deactivated against the corresponding RL method, all three generalization classes yielded identical results on both performance and temporal difference (TD) error, verifying the equivalence predicted by the theory.
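
The sketch below is an illustrative aid, not the authors' experimental setup: a minimal Q-learning agent with linear function approximation on a tiny one-dimensional gridworld. It shows how two of the generalization classes named in the abstract (tabular representation and state aggregation) arise as different feature maps under the same linear TD-style update, and where the TD error used as a comparison criterion appears. All names and parameters here (N_STATES, ALPHA, the aggregated feature map, etc.) are assumptions for illustration only.

```python
# Minimal sketch (assumed setup, not the paper's code): Q-learning with linear
# function approximation on a 1-D gridworld. One-hot features recover the
# tabular case; grouping adjacent states gives state aggregation; any other
# real-valued features would give general linear approximation.
import numpy as np

N_STATES = 6          # states 0..5; state 5 is the goal (illustrative choice)
ACTIONS = (-1, +1)    # move left or right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def one_hot(s):
    # Tabular representation: one feature per state.
    phi = np.zeros(N_STATES); phi[s] = 1.0; return phi

def aggregated(s):
    # State aggregation: pairs of adjacent states share one feature.
    phi = np.zeros(N_STATES // 2); phi[s // 2] = 1.0; return phi

def features(s, kind):
    return one_hot(s) if kind == "tabular" else aggregated(s)

def q_value(w, s, a, kind):
    # Linear approximation: Q(s, a) = w_a . phi(s).
    return w[a] @ features(s, kind)

def run(kind, episodes=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(features(0, kind))
    w = {a: np.zeros(dim) for a in ACTIONS}   # one weight vector per action
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy action selection.
            if rng.random() < EPS:
                a = ACTIONS[rng.integers(2)]
            else:
                a = max(ACTIONS, key=lambda b: q_value(w, s, b, kind))
            s_next = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s_next == N_STATES - 1 else 0.0
            # TD error of the linear-FA Q-learning update; this quantity is
            # the kind of criterion the abstract compares between LCS and RL.
            target = r if s_next == N_STATES - 1 else \
                r + GAMMA * max(q_value(w, s_next, b, kind) for b in ACTIONS)
            td_error = target - q_value(w, s, a, kind)
            w[a] += ALPHA * td_error * features(s, kind)
            s = s_next
    return w

if __name__ == "__main__":
    for kind in ("tabular", "aggregated"):
        w = run(kind)
        print(kind, {a: np.round(v, 2) for a, v in w.items()})
```

Switching the feature map while keeping the update rule fixed mirrors the comparison structure described in the abstract, where the same reinforcement process is examined under different generalization schemes.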