Learning classifier system with average reward reinforcement learning

  • Authors:
  • Zhaoxiang Zang, Dehua Li, Junying Wang, Dan Xia

  • Affiliations:
  • Zhaoxiang Zang, Dehua Li, Dan Xia: Institute for Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
  • Junying Wang: College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei 443000, China

  • Venue:
  • Knowledge-Based Systems
  • Year:
  • 2013

Abstract

In the family of Learning Classifier Systems, XCS is the most widely used and investigated. However, standard XCS has difficulty solving large multi-step problems, where long action chains are needed to obtain delayed rewards. To date, the reinforcement learning technique in XCS has been based on Q-learning, which optimizes the discounted total reward received by an agent but tends to limit the length of learnable action chains. Undiscounted alternatives are available, such as R-learning and average reward reinforcement learning in general, which instead optimize the average reward per time step. In this paper, R-learning replaces Q-learning as the reinforcement learning technique in XCS. The modification yields a classifier system that learns rapidly and can solve large maze problems. In addition, it produces uniformly spaced payoff levels, which support long action chains and thus effectively prevent overgeneralization.
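
The core change the abstract describes is swapping XCS's Q-learning update for R-learning's undiscounted, average-adjusted update. The sketch below is a minimal tabular R-learning loop (Schwartz, 1993) on a toy corridor maze, not the authors' XCS integration: in the paper the update is applied to classifier predictions rather than a state-action table, and the `Corridor` environment and all hyperparameter values here are illustrative assumptions.

```python
import random
from collections import defaultdict

class Corridor:
    """Toy multi-step maze (illustrative, not from the paper): the agent
    starts in cell 0 and must reach cell `length`; the only payoff is
    1000 at the goal, mimicking XCS-style delayed reward."""
    def __init__(self, length=10):
        self.length = length
        self.n_actions = 2                          # 0 = left, 1 = right
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):
        self.pos = max(0, self.pos + (1 if action == 1 else -1))
        done = self.pos == self.length
        return self.pos, (1000.0 if done else 0.0), done

def r_learning(env, episodes=2000, max_steps=500,
               alpha=0.05, beta=0.2, epsilon=0.1):
    """Tabular R-learning: learns average-adjusted action values R(s, a)
    plus an estimate rho of the average reward per step. Note the
    absence of a discount factor gamma anywhere in the update."""
    R, rho, acts = defaultdict(float), 0.0, range(env.n_actions)
    for _ in range(episodes):
        s, done = env.reset(), False
        for _ in range(max_steps):
            # Greedy action with random tie-breaking, then epsilon-greedy choice.
            best = max(R[(s, a)] for a in acts)
            greedy = random.choice([a for a in acts if R[(s, a)] == best])
            a = random.choice(list(acts)) if random.random() < epsilon else greedy
            s2, r, done = env.step(a)
            best_next = max(R[(s2, a2)] for a2 in acts)
            # Undiscounted TD error: immediate reward minus average reward.
            delta = r - rho + best_next - R[(s, a)]
            R[(s, a)] += beta * delta
            if a == greedy:           # rho is updated only on greedy steps
                rho += alpha * delta
            s = s2
            if done:
                break
    return R, rho

R, rho = r_learning(Corridor(length=10))
print(f"estimated average reward per step: {rho:.1f}")
```

Because each backup subtracts the average reward rho instead of multiplying by a discount factor, the learned values along an optimal path differ by a constant amount per step. This is one way to read the "uniformly spaced payoff levels" property the abstract credits with supporting long action chains, in contrast to Q-learning's geometrically shrinking payoff levels.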