Machine Learning
Strength or Accuracy: Credit Assignment in Learning Classifier Systems
UCSpv: principled voting in UCS rule populations
Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation
Toward a better understanding of rule initialisation and deletion
Proceedings of the 9th Annual Conference Companion on Genetic and Evolutionary Computation
Learning Classifier Systems: Looking Back and Glimpsing Ahead
Learning Classifier Systems
Modeling UCS as a mixture of experts
Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation
Accuracy exponentiation in UCS and its effect on voting margins
Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation
Learning Classifier Systems (LCS) differ from many other classification techniques in that new rules are constantly discovered and evaluated. This feature of LCS gives rise to an important problem: how to deal with estimates of rule accuracy that are unreliable because only a small number of performance samples is available. In this paper we highlight the importance of this problem for LCS, summarise previous heuristic approaches to it, and propose instead the use of principles from Bayesian estimation. In particular, we argue that discounting accuracy estimates on the basis of inexperience must be recognised as a crucially important part of the specification of an LCS, and must be well motivated. We present experimental results on the Bayesian approach to discounting, consider how to estimate its parameters, and identify benefits of its use in other areas of LCS.
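To illustrate the core idea, here is a minimal sketch (not the paper's actual formulation) of Bayesian discounting for a rule's accuracy estimate. It contrasts the raw maximum-likelihood estimate with the posterior mean under a Beta prior, which pulls inexperienced rules toward the prior mean; the function names and default prior parameters are illustrative assumptions.

```python
def ml_accuracy(correct, total):
    # Raw maximum-likelihood estimate: unreliable when `total` is small.
    return correct / total if total else 0.0

def bayes_accuracy(correct, total, alpha=1.0, beta=1.0):
    # Posterior mean under a Beta(alpha, beta) prior on accuracy.
    # With few samples the estimate is discounted toward the prior
    # mean alpha / (alpha + beta); with many samples it converges
    # to the maximum-likelihood estimate.
    return (correct + alpha) / (total + alpha + beta)

# A rule matched twice and was correct both times: the raw estimate
# says it is perfectly accurate, while the Bayesian estimate with a
# uniform prior remains cautious.
print(ml_accuracy(2, 2))      # 1.0
print(bayes_accuracy(2, 2))   # 0.75
```

With a uniform Beta(1, 1) prior this is Laplace's rule of succession; stronger priors (larger alpha + beta) discount inexperienced rules more aggressively.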