Analysing the effects of diverse operators in a genetic programming system
PPSN'12 Proceedings of the 12th international conference on Parallel Problem Solving from Nature - Volume Part I
Operator adaptation in evolutionary computation has previously been applied either to small numbers of operators or to larger numbers of fairly similar ones. This paper focuses on adaptation in algorithms offering a diverse range of operators. We compare a number of previously developed adaptation strategies, together with two designed specifically for this situation. Probability Matching and Adaptive Pursuit methods performed reasonably well in this scenario, but a strategy combining aspects of both performed better. Multi-Armed Bandit techniques performed well when their parameter settings were suitably tailored to the problem, but this tailoring was difficult, and performance was very brittle when the settings were varied.
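To make the compared strategies concrete, the following is a minimal sketch of the two baseline adaptive operator selection schemes named in the abstract, Probability Matching (PM) and Adaptive Pursuit (AP). It is not the paper's implementation; the class, parameter names (`p_min`, `alpha`, `beta`), and default values are illustrative assumptions based on standard formulations of these methods.

```python
import random

class OperatorSelector:
    """Sketch of Probability Matching (PM) and Adaptive Pursuit (AP).

    Each operator keeps a reward estimate q[i]; selection probabilities
    p[i] are derived from the estimates, with a floor p_min guaranteeing
    every operator a minimal chance of being tried.
    """

    def __init__(self, n_ops, p_min=0.1, alpha=0.8, beta=0.8, pursuit=False):
        self.n = n_ops
        self.p_min = p_min                       # exploration floor per operator
        self.p_max = 1.0 - (n_ops - 1) * p_min   # AP target for the best operator
        self.alpha = alpha                       # quality-estimate learning rate
        self.beta = beta                         # probability learning rate (AP only)
        self.pursuit = pursuit                   # False -> PM, True -> AP
        self.q = [1.0] * n_ops                   # reward estimates (optimistic init)
        self.p = [1.0 / n_ops] * n_ops           # selection probabilities

    def select(self):
        # roulette-wheel selection over the current probabilities
        r, acc = random.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r <= acc:
                return i
        return self.n - 1

    def update(self, op, reward):
        # exponential recency-weighted estimate of operator quality
        self.q[op] += self.alpha * (reward - self.q[op])
        if self.pursuit:
            # AP: push the best operator's probability toward p_max,
            # all others toward p_min
            best = max(range(self.n), key=lambda i: self.q[i])
            for i in range(self.n):
                target = self.p_max if i == best else self.p_min
                self.p[i] += self.beta * (target - self.p[i])
        else:
            # PM: probabilities proportional to quality, above the floor
            total = sum(self.q)
            self.p = [self.p_min + (1 - self.n * self.p_min) * qi / total
                      for qi in self.q]
```

In a generational loop, one would call `select()` to pick the operator applied to each individual and `update(op, reward)` with a credit-assignment signal such as offspring fitness improvement; the paper's point is that with many diverse operators, the choice and tuning of such a scheme matters considerably.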