Evolutionary algorithms (EAs) are particularly suited to solving problems for which little information is available. From this standpoint, estimation of distribution algorithms (EDAs), which guide the search by building probabilistic models of the population, have brought a new view to evolutionary computation. While solving a given problem with an EDA, the user has access to a set of models that reveal probabilistic dependencies between variables, an important source of information about the problem. However, as the complexity of the models increases, so does the chance of overfitting, which in turn reduces model interpretability. This paper investigates the relationship between the probabilistic models learned by the Bayesian optimization algorithm (BOA) and the underlying problem structure. The purpose of the paper is threefold. First, model building in BOA is analyzed to understand how the problem structure is learned. Second, it is shown how the selection operator can lead to model overfitting in Bayesian EDAs. Third, the scoring metric that guides the search for an adequate model structure is modified to take into account the non-uniform distribution of the mating pool generated by tournament selection. Overall, this paper contributes to understanding and improving model accuracy in BOA, providing more interpretable models to assist efficiency enhancement techniques and human researchers.
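The non-uniform mating-pool distribution mentioned above can be illustrated with a minimal, self-contained sketch (not the paper's actual experimental setup; the population size, tournament size, and fitness assignment below are arbitrary assumptions for illustration). Under s-ary tournament selection, the best individual wins every tournament it enters, so its selection frequency is roughly s/n rather than the 1/n of uniform sampling:

```python
import random

def tournament_select(pop_indices, fitness, s, rng):
    """Return the index of the winner of one s-ary tournament:
    draw s distinct candidates uniformly, keep the fittest."""
    candidates = rng.sample(pop_indices, s)
    return max(candidates, key=lambda i: fitness[i])

rng = random.Random(42)
n, s = 8, 4                        # illustrative population and tournament sizes
fitness = list(range(n))           # individual i has fitness i; index n-1 is best
pop_indices = list(range(n))

# Build a large mating pool and measure the empirical selection frequency
# of the best individual.
pool = [tournament_select(pop_indices, fitness, s, rng) for _ in range(10_000)]
freq_best = pool.count(n - 1) / len(pool)

# Uniform selection would give each index frequency 1/8 = 0.125; the best
# individual instead appears in a fraction of roughly s/n = 0.5 of the pool,
# so the mating-pool distribution is strongly non-uniform. A scoring metric
# that assumes an i.i.d. uniform sample can mistake this selection-induced
# skew for spurious dependencies between variables.
print(freq_best)
```

The duplicated copies of strong individuals are precisely what biases a likelihood-based scoring metric during model building, which is why the correction discussed in the paper conditions on the sampling distribution induced by tournament selection.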