Selection and Reinforcement Learning for Combinatorial Optimization
PPSN VI Proceedings of the 6th International Conference on Parallel Problem Solving from Nature
The aim of this paper is to extend selection learning, initially designed for the optimization of real-valued functions over fixed-length binary strings, to fixed-length strings over an arbitrary finite alphabet. We derive selection learning algorithms from clear principles. First, we search for product probability measures over d-ary strings, or equivalently, random variables whose components are statistically independent. Second, these distributions are evaluated relative to the expectation of the fitness function; more precisely, we consider the logarithm of the expectation in order to introduce fitness-proportional and Boltzmann selections. Third, we define two kinds of gradient systems to maximize the expectation: the first drives unbounded parameters, whereas the second directly drives probabilities, à la PBIL. We also introduce composite selection, that is, algorithms which take into account positively as well as negatively selected strings. We propose stochastic approximations for the gradient systems, and finally we apply three of the resulting algorithms to two test functions, OneMax and BigJump, and draw conclusions on their relative strengths and weaknesses.
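To make the abstract's ingredients concrete, the following is a minimal sketch of a PBIL-style algorithm with composite selection on the binary (d = 2) case of OneMax. The probability vector `p` is a product measure over independent bits; each generation it is pulled toward the best sampled string (positive selection) and pushed away from the worst (negative selection). The learning rates, population size, and clamping bounds are illustrative assumptions, not values taken from the paper.

```python
import random

def onemax(bits):
    """OneMax fitness: the number of ones in the bit string."""
    return sum(bits)

def pbil(fitness, n=20, pop_size=50, lr=0.1, neg_lr=0.075, gens=200, seed=0):
    """PBIL-style sketch with composite (positive and negative) selection.

    p is a product probability measure over n independent bits; it is
    moved toward the best sample and away from the worst one each
    generation. All parameter values here are illustrative.
    """
    rng = random.Random(seed)
    p = [0.5] * n          # start from the uniform product distribution
    best_fit = 0
    for _ in range(gens):
        # Sample a population from the current product distribution.
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(pop_size)]
        pop.sort(key=fitness)
        best, worst = pop[-1], pop[0]
        best_fit = max(best_fit, fitness(best))
        for i in range(n):
            p[i] += lr * (best[i] - p[i])       # positive selection
            p[i] -= neg_lr * (worst[i] - p[i])  # negative selection
            p[i] = min(max(p[i], 0.01), 0.99)   # keep components in (0, 1)
    return best_fit
```

On OneMax, `pbil(onemax)` quickly drives every component of `p` toward 1, so the all-ones optimum is found within a few dozen generations.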