Estimation-of-Distribution Algorithms (EDAs) are often praised for their ability to optimize a broad class of problems. Their practical application, however, is still limited. A frequently heard criticism is that a large population size is required and that distribution estimation is time-consuming. Here we examine possibilities for improvement in both areas. We first discuss the use of a memory to aggregate information over multiple generations and thereby reduce the required population size. The approach we take, empirical risk minimization to perform non-linear regression of the memory parameters, may well generalize to other EDAs. We design such a memory for a Gaussian EDA and observe smaller population-size requirements and fewer evaluations. We also speed up the selection of Bayesian factorizations for Gaussian EDAs by sorting the entries of the covariance matrix. Finally, we discuss parameter-free Gaussian EDAs for real-valued single-objective optimization. We propose not only to increase the population size in subsequent runs, but also to divide it over parallel runs across the search space. This yields improvements on some multimodal problems.
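To make the core loop concrete, the following is a minimal sketch of a continuous Gaussian EDA (EMNA-style) in Python. The `eta` blending of old and new distribution parameters is a simple illustrative stand-in for a cross-generational memory; it is not the regression-based memory the abstract describes, and all names and parameter values here are assumptions for illustration only.

```python
import numpy as np

def gaussian_eda(f, dim, pop_size=50, tau=0.3, eta=0.8,
                 generations=100, seed=0):
    """Minimize f with a simple Gaussian EDA.

    Each generation: sample from a multivariate Gaussian, keep the
    best tau-fraction, re-estimate mean and covariance from the
    selected solutions. `eta` blends the new estimates with the old
    ones -- a crude "memory" that aggregates information over
    generations (illustrative only; not the paper's method).
    """
    rng = np.random.default_rng(seed)
    mean = rng.standard_normal(dim)
    cov = np.eye(dim)
    n_sel = max(2, int(tau * pop_size))
    best_x, best_f = None, np.inf

    for _ in range(generations):
        pop = rng.multivariate_normal(mean, cov, size=pop_size)
        fit = np.array([f(x) for x in pop])
        order = np.argsort(fit)
        if fit[order[0]] < best_f:
            best_f, best_x = fit[order[0]], pop[order[0]]
        sel = pop[order[:n_sel]]  # truncation selection
        # Maximum-likelihood estimates from the selected solutions,
        # blended with the previous generation's parameters.
        new_mean = sel.mean(axis=0)
        new_cov = np.cov(sel.T) + 1e-8 * np.eye(dim)  # keep PSD
        mean = eta * new_mean + (1 - eta) * mean
        cov = eta * new_cov + (1 - eta) * cov

    return best_x, best_f

# Usage: minimize the sphere function in 5 dimensions.
sphere = lambda x: float(np.dot(x, x))
x_best, f_best = gaussian_eda(sphere, dim=5)
```

Note that pure maximum-likelihood re-estimation is known to shrink the covariance too quickly on slope-like regions, which is one motivation for the adaptive variance scaling used in algorithms such as AMaLGaM.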