Adaptive Operator Selection (AOS) turns the observed impact of applying variation operators into Operator Selection decisions through a Credit Assignment mechanism. However, most Credit Assignment schemes make direct use of the fitness gain between parent and offspring. A first issue is that an Operator Selection technique relying on such a Credit Assignment is likely to depend heavily on the a priori unknown bounds of the fitness function. Additionally, these bounds are likely to change along evolution, as fitness gains tend to shrink as convergence occurs. Furthermore, and perhaps more importantly, a fitness-based Credit Assignment forbids any invariance under monotonic transformations of the fitness, which is a usual source of robustness for comparison-based Evolutionary Algorithms. In this context, this paper proposes two new Credit Assignment mechanisms, one inspired by the Area Under the Curve paradigm and the other close to the Sum of Ranks. Using fitness improvements as raw rewards, and directly coupled to a Multi-Armed Bandit Operator Selection rule, the resulting AOS achieves very good performance on both the OneMax problem and some artificial scenarios, while demonstrating robustness with respect to its hyper-parameters and to fitness transformations. Furthermore, using fitness ranks as raw rewards yields a fully comparison-based AOS with reasonable performance.
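To make the idea concrete, here is a minimal sketch of a rank-based Credit Assignment in the spirit of the Sum of Ranks, coupled with a UCB-style Multi-Armed Bandit selection rule. All names, the decay scheme, and the window representation are illustrative assumptions, not the paper's exact formulation; the point is that credit computed from reward *ranks* is unchanged by any monotonic transformation of the raw fitness improvements.

```python
import math

def sum_of_ranks_credit(window, n_ops, decay=0.5):
    """Credit per operator from a sliding window of (operator, reward) pairs.

    Rewards are ranked best-first; each operator accumulates decayed rank
    values. Because only the ordering of rewards matters, the credit is
    invariant under monotonic transformations of the fitness improvements.
    (Illustrative scheme, not the paper's exact definition.)
    """
    ranked = sorted(window, key=lambda item: -item[1])  # best reward first
    credit = [0.0] * n_ops
    for rank, (op, _) in enumerate(ranked):
        credit[op] += decay ** rank * (len(ranked) - rank)
    total = sum(credit)
    return [c / total if total > 0 else 1.0 / n_ops for c in credit]

def ucb_select(credit, counts, c=1.0):
    """UCB1-style operator choice: exploit credit, explore rarely used ops."""
    t = sum(counts) + 1
    best_op, best_score = 0, -math.inf
    for op, (q, n) in enumerate(zip(credit, counts)):
        # Unplayed operators get infinite score, forcing initial exploration.
        score = math.inf if n == 0 else q + c * math.sqrt(2.0 * math.log(t) / n)
        if score > best_score:
            best_op, best_score = op, score
    return best_op
```

In an evolutionary loop one would, at each generation, call `ucb_select` to pick an operator, apply it, push the resulting `(operator, fitness_improvement)` pair into the window, and recompute the credit vector. Note that replacing every reward `r` by, say, `r**3` leaves the output of `sum_of_ranks_credit` unchanged, which is exactly the comparison-based robustness the abstract argues for.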