Recent progress in unconstrained nonlinear optimization without derivatives. Mathematical Programming, Series A and B, special issue: papers from ISMP97, the 16th International Symposium on Mathematical Programming, Lausanne, EPFL.
Efficient and Accurate Parallel Genetic Algorithms
Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation
Weighted multirecombination evolution strategies. Theoretical Computer Science: Foundations of Genetic Algorithms.
Lower Bounds for Evolution Strategies Using VC-Dimension. Proceedings of the 10th International Conference on Parallel Problem Solving from Nature (PPSN X).
Covariance Matrix Adaptation Revisited: The CMSA Evolution Strategy. Proceedings of the 10th International Conference on Parallel Problem Solving from Nature (PPSN X).
On the Parallel Speed-Up of Estimation of Multivariate Normal Algorithm and Evolution Strategies. EvoWorkshops '09: Proceedings of the EvoWorkshops 2009 on Applications of Evolutionary Computing (EvoCOMNET, EvoENVIRONMENT, EvoFIN, EvoGAMES, EvoHOT, EvoIASP, EvoINTERACTION, EvoMUSART, EvoNUM, EvoSTOC, EvoTRANSLOG).
Why one must use reweighting in estimation of distribution algorithms. Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation.
Cumulative step length adaptation for evolution strategies using negative recombination weights. Evo'08: Proceedings of the 2008 Conference on Applications of Evolutionary Computing.
General lower bounds for evolutionary algorithms. PPSN'06: Proceedings of the 9th International Conference on Parallel Problem Solving from Nature.
A rigorous runtime analysis for quasi-random restarts and decreasing stepsize. EA'11: Proceedings of the 10th International Conference on Artificial Evolution.
Evolutionary algorithms are usually considered to be highly parallel. In fact, the theoretical speed-ups for parallel optimization are far better than the empirical results, which suggests that, for large numbers of processors, evolutionary algorithms are not so efficient. In this paper, we show that in many cases automatic parallelization provably yields better results than the standard parallelization, which consists of simply increasing the population size λ. A corollary of these results is that logarithmic bounds on the speed-up (as a function of the number of computing units) are tight within constant factors. Importantly, we propose a simple modification, termed the log(λ)-correction, which strongly improves several important algorithms when λ is large.
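To make the parallel cost model concrete, here is a minimal sketch of a (1,λ) evolution strategy on the sphere function, where one generation of λ offspring counts as one unit of wall-clock time (so λ plays the role of the number of computing units). The function name `one_comma_lambda_es`, the 0.99 decay constant, and the way a log(λ)-dependent step-size decay is wired in are all illustrative assumptions for exposition; the paper's actual log(λ)-correction may differ in detail.

```python
import math
import random


def sphere(x):
    """Sphere test function: f(x) = sum of squared coordinates."""
    return sum(xi * xi for xi in x)


def one_comma_lambda_es(dim=5, lam=16, sigma=0.3, iters=200,
                        log_correction=False, seed=0):
    """(1,lambda)-ES on the sphere function.

    Each iteration samples lambda offspring around the current parent and
    keeps the best (comma selection, no elitism). In a parallel setting,
    each iteration costs one time unit regardless of lambda, so lambda
    models the number of processors.
    """
    rng = random.Random(seed)
    x = [1.0] * dim  # initial parent, f(x) = dim
    # Hypothetical log(lambda)-correction: let the step-size decay scale
    # with log(lambda), so larger populations shrink sigma faster. This is
    # a sketch of the idea only, not the paper's exact rule.
    decay = 0.99 ** (math.log(lam + 1) if log_correction else 1.0)
    for _ in range(iters):
        best = None
        for _ in range(lam):
            cand = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
            f = sphere(cand)
            if best is None or f < best[0]:
                best = (f, cand)
        x = best[1]
        sigma *= decay
    return sphere(x)
```

Running this with growing λ (say 2, 16, 64) shows the diminishing returns the abstract refers to: each doubling of λ buys only a roughly constant improvement per generation, consistent with a speed-up that is logarithmic in the number of computing units.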