UMDA (the univariate marginal distribution algorithm) was derived by analyzing the mathematical principles behind recombination; mutation was not considered. The same holds for FDA (the factorized distribution algorithm), an extension of UMDA that can capture dependencies between variables. In this paper, mutation is introduced into these algorithms through a technique called the Bayesian prior. We theoretically derive an estimate of how to choose the Bayesian prior, and the recommended value proves to be a good choice in a number of experiments. These experiments also indicate that mutation often improves the performance of the algorithms and reduces their dependence on a good choice of the population size.
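The mechanism described above can be illustrated with a minimal sketch of UMDA in which each univariate marginal is estimated with a Laplace-style Bayesian prior r, so that no probability can collapse to 0 or 1 and every bit value retains a chance of reappearing — the effect that plays the role of mutation. The function names, the default prior r = 1.0, and the OneMax test problem are illustrative assumptions, not the paper's derived recommendation.

```python
import random

def umda_with_prior(fitness, n_bits, pop_size=100, trunc=0.5,
                    r=1.0, gens=50, seed=0):
    """UMDA where marginals use a Bayesian prior r as implicit mutation.

    Each marginal is estimated as (count + r) / (n_sel + 2*r), which
    keeps it strictly inside (0, 1); r = 0 recovers plain UMDA.
    """
    rng = random.Random(seed)
    p = [0.5] * n_bits  # initial univariate marginals
    pop = []
    for _ in range(gens):
        # Sample a population from the product of univariate marginals.
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        # Truncation selection: keep the best fraction.
        pop.sort(key=fitness, reverse=True)
        selected = pop[:max(1, int(trunc * pop_size))]
        n_sel = len(selected)
        # Bayesian-prior estimate of each marginal frequency.
        p = [(sum(ind[i] for ind in selected) + r) / (n_sel + 2 * r)
             for i in range(n_bits)]
    return max(pop, key=fitness)

best = umda_with_prior(sum, n_bits=20)  # OneMax: maximize the number of ones
```

With r > 0 the marginals are bounded away from 0 and 1, so even after the selected set becomes uniform in some bit, the "lost" value can still be resampled — the same diversity-preserving effect the abstract attributes to mutation.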