DXCS: an XCS system for distributed data mining
GECCO '05 Proceedings of the 7th annual conference on Genetic and evolutionary computation
Get real! XCS with continuous-valued inputs
GECCO '05 Proceedings of the 7th annual workshop on Genetic and evolutionary computation
Collective behavior based hierarchical XCS
Proceedings of the 9th annual conference companion on Genetic and evolutionary computation
Classifier fitness based on accuracy
Evolutionary Computation
Measurement and control of self-organised behaviour in robot swarms
ARCS'07 Proceedings of the 20th international conference on Architecture of computing systems
CoXCS: A Coevolutionary Learning Classifier Based on Feature Space Partitioning
AI '09 Proceedings of the 22nd Australasian Joint Conference on Advances in Artificial Intelligence
Learning Classifier Systems (LCSs) are rule-based evolutionary reinforcement learning (RL) systems. Today, variants of Wilson's eXtended Classifier System (XCS) in particular are widely applied in machine learning. Despite this widespread application, LCSs have a drawback: the number of reinforcement cycles an LCS requires for learning depends largely on the complexity of the learning task. A straightforward way to reduce this complexity is to split the task into smaller sub-problems; whenever such a split is possible, performance should improve significantly. In this paper, a nature-inspired multi-agent scenario is used to evaluate and compare different distributed LCS variants. The results show that learning speed can be improved by cleverly dividing a problem into smaller learning sub-problems.
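The decomposition idea can be illustrated with a toy sketch on the 6-bit multiplexer, a standard LCS benchmark. The rote learner below is a deliberately naive stand-in, not an XCS implementation (a real XCS evolves generalising rules rather than memorising states); the partition on the first address bit is likewise just one illustrative choice of sub-problem split:

```python
import random

def multiplexer6(bits):
    """6-bit multiplexer: the 2 address bits select one of 4 data bits."""
    addr = bits[0] * 2 + bits[1]
    return bits[2 + addr]

def cycles_to_learn(states, target, rng):
    """Toy rote learner: count reinforcement cycles until the correct
    action has been memorised for every state. This stands in for
    'learning time grows with the size of the problem space'."""
    policy, cycles = {}, 0
    while len(policy) < len(states):
        s = rng.choice(states)
        policy[s] = target(s)  # the reward signal reveals the correct action
        cycles += 1
    return cycles

all_states = [tuple((i >> k) & 1 for k in range(6)) for i in range(64)]
# Partition the feature space on the first address bit: two sub-problems
# of 32 states each, each learned by its own agent.
parts = ([s for s in all_states if s[0] == 0],
         [s for s in all_states if s[0] == 1])

rng = random.Random(0)
trials = 50
mono = sum(cycles_to_learn(all_states, multiplexer6, rng)
           for _ in range(trials)) / trials
# The sub-problems are learned in parallel, so the cost per trial is
# the slower of the two learners.
split = sum(max(cycles_to_learn(p, multiplexer6, rng) for p in parts)
            for _ in range(trials)) / trials
print(f"monolithic: {mono:.0f} cycles, partitioned: {split:.0f} cycles")
```

Even this crude model shows the effect the abstract describes: halving the state space each learner must cover reduces the reinforcement cycles needed, despite the sub-learners running on the same random stream.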