In this work we present an exhaustive empirical analysis of the Pittsburgh-style BioHEL system using a broad set of variants of the well-known k-DNF Boolean functions. These functions pose a wide range of challenges for most machine learning techniques, such as varying degrees of rule specificity, class imbalance, and niche overlap. Moreover, since the ideal solutions are known, one can easily assess whether a learning system is able to find them, and how fast. Specifically, we study two aspects of BioHEL: its sensitivity to the coverage breakpoint parameter (which determines the degree of generality pressure applied by the fitness function) and the default rule policy. The results show that BioHEL is highly sensitive to the choice of coverage breakpoint (as expected) and that using a suitable (known beforehand) default class allows the system to learn faster than a majority-class policy. Moreover, the experiments indicate that BioHEL's scalability depends directly on both k (the specificity of the rules) and the number of DNF terms in the problem.
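To make the benchmark family concrete, the following is a minimal sketch of how a random k-DNF Boolean function can be generated and evaluated: a disjunction of conjunctive terms, each over exactly k literals. The function and parameter names (`random_kdnf`, `eval_kdnf`, `n_vars`, `n_terms`) are illustrative assumptions, not BioHEL's actual experimental harness.

```python
import random

def random_kdnf(n_vars, k, n_terms, rng=None):
    """Sample a random k-DNF formula: a disjunction of n_terms
    conjunctive terms, each built from exactly k distinct variables
    (out of n_vars), where each variable appears positive or negated."""
    rng = rng or random.Random()
    terms = []
    for _ in range(n_terms):
        chosen = rng.sample(range(n_vars), k)
        # (variable index, True if positive literal, False if negated)
        terms.append([(v, rng.random() < 0.5) for v in chosen])
    return terms

def eval_kdnf(terms, assignment):
    """A k-DNF is satisfied iff at least one term has all its
    literals satisfied by the Boolean assignment."""
    return any(all(assignment[v] == positive for v, positive in term)
               for term in terms)

# Labelling all 2^n assignments of such a formula yields a dataset
# whose ideal rule set is known, so a learner's output can be
# checked against it directly.
```

Varying k and `n_terms` reproduces the kinds of difficulty the abstract mentions: larger k forces more specific rules, while the number and overlap of terms control class imbalance and niche overlap in the induced dataset.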