Improving XCS Performance by Distribution

  • Authors:
  • Urban Richter, Holger Prothmann, Hartmut Schmeck

  • Affiliations:
  • Karlsruhe Institute of Technology, Institute AIFB, 76128 Karlsruhe, Germany (all authors)

  • Venue:
  • SEAL '08 Proceedings of the 7th International Conference on Simulated Evolution and Learning
  • Year:
  • 2008

Abstract

Learning Classifier Systems (LCSs) are rule-based evolutionary reinforcement learning (RL) systems. Today, variants of Wilson's eXtended Classifier System (XCS) in particular are widely applied in machine learning. Despite their widespread application, LCSs have a drawback: the number of reinforcement cycles an LCS requires for learning depends largely on the complexity of the learning task. A straightforward way to reduce this complexity is to split the task into smaller sub-problems. Whenever such a decomposition is possible, performance should improve significantly. In this paper, a nature-inspired multi-agent scenario is used to evaluate and compare different distributed LCS variants. The results show that improvements in learning speed can be achieved by cleverly dividing a problem into smaller learning sub-problems.
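
As a rough illustration of the decomposition idea sketched in the abstract (not the paper's actual setup), the following Python sketch compares a single monolithic learner on a toy task, whose reward decomposes over independent bits, with a set of small sub-learners, one per bit. A simple tabular epsilon-greedy learner stands in for a full XCS; the task, parameter values, and all names are hypothetical and chosen only to make the size difference of the learning problems visible.

```python
import itertools
import random
from collections import defaultdict

# Toy decomposable task: the state is K independent bits, the action is K bits,
# and the global reward counts matching bits.  A monolithic learner faces a
# 2^K x 2^K value table, while each of the K sub-learners faces only a 2 x 2
# table.  Tabular Q-learning is used here as a stand-in for XCS.

K = 4
EPISODES = 3000
ALPHA, EPSILON = 0.2, 0.1


class TabularLearner:
    """Minimal epsilon-greedy, single-step learner (placeholder for an XCS)."""

    def __init__(self, actions):
        self.actions = list(actions)
        self.q = defaultdict(float)

    def act(self, state):
        if random.random() < EPSILON:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward):
        key = (state, action)
        self.q[key] += ALPHA * (reward - self.q[key])


def run(decomposed):
    if decomposed:
        learners = [TabularLearner([0, 1]) for _ in range(K)]
    else:
        learners = [TabularLearner(itertools.product([0, 1], repeat=K))]
    rewards = []
    for _ in range(EPISODES):
        state = tuple(random.randint(0, 1) for _ in range(K))
        if decomposed:
            action = tuple(lrn.act(s) for lrn, s in zip(learners, state))
        else:
            action = learners[0].act(state)
        r = sum(int(s == a) for s, a in zip(state, action))
        if decomposed:
            # Local credit assignment: each sub-learner is rewarded only for
            # its own bit; the global reward is the sum of these local rewards.
            for lrn, s, a in zip(learners, state, action):
                lrn.update(s, a, int(s == a))
        else:
            learners[0].update(state, action, r)
        rewards.append(r)
    return sum(rewards[-500:]) / 500  # average reward near the end of training


if __name__ == "__main__":
    print("monolithic :", run(decomposed=False))
    print("decomposed :", run(decomposed=True))
```

On this toy task the decomposed learners should approach the maximum reward per step with far fewer episodes than the monolithic learner, simply because each sub-learner explores a much smaller state-action space; this mirrors the learning-speed argument made in the abstract, though the paper itself evaluates distributed XCS variants in a nature-inspired multi-agent scenario.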