Estimation of distribution algorithms (EDAs) are derivative-free optimization approaches based on successively estimating the probability density function of the best solutions and then sampling from it. It turns out that the success of EDAs in numerical optimization depends strongly on the scaling of the variance. The contribution of this paper is a comparison of various adaptive and self-adaptive variance scaling techniques for a Gaussian EDA. The analysis covers: (1) the Gaussian EDA without scaling, but with different selection pressures and population sizes; (2) variance adaptation via Silverman's rule of thumb; (3) σ-self-adaptation as known from evolution strategies; and (4) transformation of the solution space by estimation of the Hessian. We discuss the results for the sphere function and its constrained counterpart.
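The baseline scheme described above — estimate a Gaussian from the selected best solutions, then sample the next population from it — can be sketched as follows. This is a minimal illustration, not the authors' implementation: a diagonal (per-dimension) Gaussian with maximum-likelihood refitting and truncation selection on the sphere function; all parameter values (population size, selection ratio, initialization range) are assumptions chosen for the example.

```python
import numpy as np

def sphere(x):
    """Sphere function f(x) = sum_i x_i^2, with minimum 0 at the origin."""
    return np.sum(x**2, axis=-1)

def gaussian_eda(dim=10, pop_size=100, sel_ratio=0.3, iters=200, seed=0):
    """Gaussian EDA without variance scaling (variant (1) in the abstract).

    Each generation: sample the population from the current Gaussian,
    keep the best fraction, and refit mean and per-dimension standard
    deviation by maximum likelihood on the selected solutions. Without
    scaling, the ML refit tends to shrink the variance every generation.
    """
    rng = np.random.default_rng(seed)
    mean = rng.uniform(-5.0, 5.0, size=dim)   # assumed init range
    std = np.full(dim, 5.0)
    n_sel = int(pop_size * sel_ratio)
    for _ in range(iters):
        pop = mean + std * rng.standard_normal((pop_size, dim))
        fitness = sphere(pop)
        selected = pop[np.argsort(fitness)[:n_sel]]  # truncation selection
        mean = selected.mean(axis=0)
        std = selected.std(axis=0) + 1e-12           # ML refit of the variance
    return mean, sphere(mean)
```

On the unimodal sphere this baseline converges toward the optimum; the variance-scaling techniques compared in the paper address settings (e.g. slopes or constrained regions) where the unscaled maximum-likelihood variance collapses too quickly.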