Several techniques have been proposed to tackle the Adaptive Operator Selection (AOS) problem in Evolutionary Algorithms. Some recent proposals are based on the Multi-armed Bandit (MAB) paradigm: each operator is viewed as one arm of a MAB problem, and rewards are mainly based on the fitness improvement that the operator brings to the individual it is applied to. However, the AOS problem is dynamic, whereas standard MAB algorithms are only known to optimally solve the exploration-versus-exploitation trade-off in static settings. An original dynamic variant of the standard MAB Upper Confidence Bound (UCB) algorithm is proposed here, using a sliding time window to compute both its exploitation and exploration terms. To enable sound comparisons between AOS algorithms, artificial scenarios have been proposed in the literature; they are extended here toward smoother transitions between different reward settings. The resulting testbed also includes a real evolutionary algorithm applied to the well-known Royal Road problem. It is used to perform a thorough analysis of the behavior of AOS algorithms, to assess their sensitivity to their own hyper-parameters, and to soundly compare their performances.
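The core idea of the abstract — computing both the exploitation (empirical reward) and exploration (usage count) terms of UCB over a sliding time window so the bandit can track non-stationary operator rewards — can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the class name, the window size, and the scaling factor `c` are assumptions for the example.

```python
import math
from collections import deque

class SlidingWindowUCB:
    """Illustrative UCB-style bandit whose statistics are restricted
    to the last `window` operator applications, so old rewards are
    forgotten and the policy can follow a dynamic reward landscape."""

    def __init__(self, n_ops, window=50, c=1.0):
        self.n_ops = n_ops
        self.c = c  # exploration scaling factor (assumed, not from the paper)
        self.history = deque(maxlen=window)  # (operator, reward) pairs

    def select(self):
        # Windowed counts and reward sums per operator.
        counts = [0] * self.n_ops
        sums = [0.0] * self.n_ops
        for op, r in self.history:
            counts[op] += 1
            sums[op] += r
        # Any operator absent from the window is applied once before
        # its UCB score is trusted (also covers the cold start).
        for op in range(self.n_ops):
            if counts[op] == 0:
                return op
        total = len(self.history)

        def ucb(op):
            mean = sums[op] / counts[op]
            return mean + self.c * math.sqrt(2.0 * math.log(total) / counts[op])

        return max(range(self.n_ops), key=ucb)

    def update(self, op, reward):
        # Appending to a bounded deque evicts the oldest observation,
        # which is what makes both UCB terms "sliding".
        self.history.append((op, reward))
```

Because the counts themselves are windowed, an operator that stops being applied eventually drops out of the window entirely and is re-tried, which is how the policy recovers when the best operator changes mid-run.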