A formal framework and extensions for function approximation in learning classifier systems

  • Authors:
  • Jan Drugowitsch; Alwyn M. Barry

  • Affiliations:
  • Department of Brain and Cognitive Sciences, University of Rochester, Rochester, USA; Department of Computer Science, University of Bath, Bath, UK

  • Venue:
  • Machine Learning
  • Year:
  • 2008

Abstract

Learning Classifier Systems (LCS) consist of three components: function approximation, reinforcement learning, and classifier replacement. In this paper we formalize the function approximation component by providing a clear problem definition, a formalization of the LCS function approximation architecture, and a definition of the function approximation aim. Additionally, we define optimality and the conditions a classifier must fulfil to be optimal. To demonstrate the usefulness of the framework, we derive from first principles the commonly used algorithmic approaches that aim at reaching optimality, and introduce a new Kalman filter-based method that outperforms all currently implemented methods while providing further insight into the probabilistic basis of the localized model that a classifier provides. A global function approximation in LCS is achieved by combining the classifiers' localized models; for this combination we provide an approach that is simpler than in current LCS and is based on the maximum likelihood of a combination of all classifiers. The formalizations in this paper act as the foundation of a formal framework, currently under active development, that covers all three LCS components, promising a better formal understanding of current LCS and the development of better LCS algorithms.
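
The Kalman filter-based method mentioned in the abstract concerns the recursive update of each classifier's localized (linear) model. The following Python sketch is only an illustration of that general idea, not the paper's exact derivation: it shows a recursive least squares / Kalman-filter-style update of a single classifier's local linear model, weighted by a matching degree. The class name, the prior variance `tau`, and the noise-variance estimate are assumptions made for this sketch.

```python
import numpy as np

class ClassifierModel:
    """Local linear model of one classifier, updated with a
    Kalman-filter-style (recursive least squares) rule.
    Illustrative sketch only; not the paper's exact formulation."""

    def __init__(self, dim, tau=1000.0):
        self.w = np.zeros(dim)       # weights of the local linear model
        self.P = tau * np.eye(dim)   # covariance of the weight estimate (assumed prior)
        self.sse = 0.0               # running (matching-weighted) squared error
        self.match_count = 0.0       # accumulated matching

    def update(self, x, y, m=1.0):
        """Update with input x, target y, and matching degree m in [0, 1]."""
        if m == 0.0:
            return
        Px = self.P @ x
        gain = m * Px / (1.0 + m * (x @ Px))    # Kalman gain for this observation
        err = y - self.w @ x                    # prediction error before the update
        self.w = self.w + gain * err
        self.P = self.P - np.outer(gain, Px)
        self.sse += m * err * (y - self.w @ x)  # standard RLS error recursion
        self.match_count += m

    def predict(self, x):
        return self.w @ x

    def noise_variance(self):
        """Rough estimate of the local model's noise variance."""
        return self.sse / max(self.match_count, 1e-12)
```

A global prediction could then be formed by mixing the predictions of all matching classifiers, for example weighting each by its matching degree and the quality of its local fit; the abstract's maximum likelihood combination is one principled way to choose such weights.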