Genetic algorithms + data structures = evolution programs (3rd ed.)
On the analysis of the (1+1) evolutionary algorithm
Theoretical Computer Science
Introduction to Algorithms
Spatially Structured Evolutionary Algorithms: Artificial Evolution in Space and Time (Natural Computing Series)
Parallel Metaheuristics: A New Class of Algorithms
On the Choice of the Offspring Population Size in Evolutionary Algorithms
Evolutionary Computation
How mutation and selection solve long-path problems in polynomial expected time
Evolutionary Computation
The benefit of migration in parallel evolutionary algorithms
Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation
General lower bounds for the running time of evolutionary algorithms
Proceedings of the 11th International Conference on Parallel Problem Solving from Nature (PPSN'10), Part I
General scheme for analyzing running times of parallel evolutionary algorithms
Proceedings of the 11th International Conference on Parallel Problem Solving from Nature (PPSN'10), Part I
On the effectiveness of crossover for migration in parallel evolutionary algorithms
Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation
Analysis of speedups in parallel evolutionary algorithms for combinatorial optimization
Proceedings of the 22nd International Conference on Algorithms and Computation (ISAAC'11)
Homogeneous and heterogeneous island models for the set cover problem
Proceedings of the 12th International Conference on Parallel Problem Solving from Nature (PPSN'12), Part I
We present two adaptive schemes for dynamically choosing the number of parallel instances in parallel evolutionary algorithms; the choice of the offspring population size in a (1+λ) EA is included as a special case. Our schemes are parameterless and work in a black-box setting where no knowledge of the problem is available. Both schemes double the number of instances whenever a generation ends without finding an improvement. In a successful generation, the first scheme resets the system to a single instance, while the second halves the number of instances. Both schemes provide near-optimal speed-ups in terms of parallel time. We give upper bounds on the asymptotic sequential time (i.e., the total number of function evaluations) that are no larger than the upper bounds for a corresponding non-parallel algorithm derived by the fitness-level method.
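The two schemes described in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: it applies the adaptive choice of λ to a (1+λ) EA with standard bit mutation, using OneMax as an assumed example fitness function. On an unsuccessful generation λ is doubled; on success, the "reset" scheme sets λ back to 1 while the "halve" scheme halves it.

```python
import random


def one_max(bits):
    # Assumed example fitness function: number of 1-bits.
    return sum(bits)


def adaptive_one_plus_lambda_ea(n, scheme="reset", seed=0):
    """Sketch of the two adaptive schemes for the (1+lambda) EA special case.

    scheme="reset": on an improving generation, reset lambda to 1.
    scheme="halve": on an improving generation, halve lambda (at least 1).
    In both schemes, an unsuccessful generation doubles lambda.
    Returns the total number of function evaluations (sequential time).
    """
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    best = one_max(parent)
    lam = 1            # current number of parallel instances / offspring
    evaluations = 0
    while best < n:    # run until the OneMax optimum is found
        improved = False
        for _ in range(lam):
            # Standard bit mutation: flip each bit with probability 1/n.
            child = [b ^ (rng.random() < 1.0 / n) for b in parent]
            evaluations += 1
            f = one_max(child)
            if f > best:
                parent, best = child, f
                improved = True
        if improved:
            lam = 1 if scheme == "reset" else max(1, lam // 2)
        else:
            lam *= 2
    return evaluations
```

A quick comparison, e.g. `adaptive_one_plus_lambda_ea(50, "reset")` versus `adaptive_one_plus_lambda_ea(50, "halve")`, shows both variants reaching the optimum; counting generations instead of evaluations would give the corresponding parallel time.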