We define a machine learning problem: forecasting arterial blood pressure. Our goal is to solve this problem with a large-scale learning classifier system. Because learning classifier systems are extremely computationally intensive and this problem's eventually large training set will be costly to evaluate, we address how to use less of the training set without degrading learning accuracy. Our approach allows competition among solutions that have not yet been evaluated on the entire training set. The best of these solutions are then evaluated on more of the training set, while their offspring start out evaluated on less of it. To keep selection fair, we stratify competing solutions by the number of training examples on which they have been tested.