Probability Matching, the Magnitude of Reinforcement, and Classifier System Bidding. Machine Learning (special issue on genetic algorithms).
Journal of Global Optimization.
Finite-time Analysis of the Multiarmed Bandit Problem. Machine Learning.
A Racing Algorithm for Configuring Metaheuristics. GECCO '02: Proceedings of the Genetic and Evolutionary Computation Conference.
An Adaptive Pursuit Strategy for Allocating Operator Probabilities. GECCO '05: Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation.
Differential Evolution: A Practical Approach to Global Optimization. Natural Computing Series.
Differential Evolution Algorithm with Strategy Adaptation for Global Numerical Optimization. IEEE Transactions on Evolutionary Computation.
Extreme Compass and Dynamic Multi-armed Bandits for Adaptive Operator Selection. CEC '09: Proceedings of the Eleventh Congress on Evolutionary Computation.
Adaptive Strategy Selection in Differential Evolution. Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation.
Toward Comparison-based Adaptive Operator Selection. Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation.
Autonomous Operator Management for Evolutionary Algorithms. Journal of Heuristics.
JADE, an Adaptive Differential Evolution Algorithm, Benchmarked on the BBOB Noiseless Testbed. Proceedings of the 14th Annual Conference Companion on Genetic and Evolutionary Computation.
Choosing which of the available strategies to use within the Differential Evolution algorithm for a given problem is not trivial: the best choice is problem-dependent and strongly affects the algorithm's performance. This decision can be made autonomously through the Adaptive Strategy Selection paradigm, which continuously selects the strategy to be used for generating the next offspring, based on the performance achieved by each of the available strategies during the current optimization run, i.e., while solving the problem. In this paper, we use the BBOB-2010 noiseless benchmarking suite to empirically validate a recently proposed comparison-based technique for this task, the Fitness-based Area-Under-Curve Bandit [4], referred to as F-AUC-Bandit. It is compared with another recently proposed approach, PM-AdapSS-DE [7], which applies the Probability Matching technique based on relative fitness improvements.
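To make the Probability Matching side of the comparison concrete, the following is a minimal, self-contained sketch of a Probability Matching operator selector in the general spirit of PM-AdapSS-DE [7]. It is illustrative only: the class name, the learning rate `alpha`, the minimum probability `p_min`, and the reward scheme are assumptions, not the exact formulation or parameter values used in the paper.

```python
import random

class ProbabilityMatching:
    """Illustrative Probability Matching selector for adaptive
    strategy selection; parameters and names are assumptions."""

    def __init__(self, n_strategies, p_min=0.05, alpha=0.3):
        self.n = n_strategies
        self.p_min = p_min      # floor keeps every strategy selectable
        self.alpha = alpha      # learning rate of the quality estimate
        self.quality = [1.0] * n_strategies
        self.probs = [1.0 / n_strategies] * n_strategies

    def select(self):
        # Roulette-wheel selection over the current probabilities.
        r, acc = random.random(), 0.0
        for i, p in enumerate(self.probs):
            acc += p
            if r <= acc:
                return i
        return self.n - 1

    def update(self, strategy, reward):
        # Exponential recency-weighted estimate of the strategy's reward
        # (e.g. the relative fitness improvement of its last offspring).
        self.quality[strategy] += self.alpha * (reward - self.quality[strategy])
        total = sum(self.quality)
        # Selection probability proportional to quality, with floor p_min;
        # the probabilities sum to 1 by construction.
        self.probs = [self.p_min + (1 - self.n * self.p_min) * q / total
                      for q in self.quality]
```

In a DE loop, `select()` would pick the trial-vector generation strategy for each offspring, and `update()` would be called with the reward observed after evaluating it. Note that this reward-magnitude dependence is exactly what the comparison-based F-AUC-Bandit avoids, since it ranks strategies by fitness comparisons rather than by raw improvement values.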