Regularization is a well-known statistical technique for model estimation, used to improve the generalization ability of the estimated model. Some regularization methods can also be used for variable selection, which is especially useful in high-dimensional problems. This paper studies the use of regularized model learning in estimation of distribution algorithms (EDAs) for continuous optimization based on Gaussian distributions. We introduce two approaches to regularized model estimation and analyze their effect on the accuracy and computational complexity of model learning in EDAs. We then apply the proposed algorithms to a number of continuous optimization functions and compare their results with those of other Gaussian distribution-based EDAs. The results show that the optimization performance of the proposed regularized EDAs (RegEDAs) is less affected by increases in problem size than that of other EDAs, and that they obtain significantly better optimization values for many of the functions in high-dimensional settings.
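To illustrate the idea the abstract describes, the following is a minimal sketch of a Gaussian EDA whose model-learning step regularizes the covariance matrix by shrinking it toward a scaled identity target (in the spirit of Ledoit-Wolf shrinkage, one common regularizer for high-dimensional covariance estimation). The fixed shrinkage intensity, the function names, and the parameter choices are illustrative assumptions, not the paper's actual RegEDA algorithms.

```python
import numpy as np

def shrinkage_covariance(samples, shrinkage=0.2):
    """Regularized covariance estimate: a convex combination of the
    empirical covariance and a scaled-identity target.
    NOTE: a fixed shrinkage intensity is an illustrative simplification;
    Ledoit-Wolf-style estimators choose it from the data."""
    emp = np.cov(samples, rowvar=False)
    dim = emp.shape[0]
    target = np.eye(dim) * np.trace(emp) / dim
    return (1.0 - shrinkage) * emp + shrinkage * target

def gaussian_eda(objective, dim, pop_size=100, sel_size=30,
                 generations=50, seed=0):
    """Minimal Gaussian EDA loop (hypothetical sketch): sample a
    population, select the best individuals, refit a regularized
    multivariate Gaussian, and repeat."""
    rng = np.random.default_rng(seed)
    mean, cov = np.zeros(dim), np.eye(dim) * 4.0
    best_x, best_f = None, np.inf
    for _ in range(generations):
        pop = rng.multivariate_normal(mean, cov, size=pop_size)
        fitness = np.apply_along_axis(objective, 1, pop)
        order = np.argsort(fitness)
        if fitness[order[0]] < best_f:
            best_f, best_x = fitness[order[0]], pop[order[0]]
        selected = pop[order[:sel_size]]
        # Model learning with regularization: the covariance of the
        # selected individuals is shrunk before the next sampling step.
        mean = selected.mean(axis=0)
        cov = shrinkage_covariance(selected)
    return best_x, best_f

if __name__ == "__main__":
    # Minimize the sphere function as a toy high-dimensional test.
    sphere = lambda x: float(np.dot(x, x))
    x_best, f_best = gaussian_eda(sphere, dim=10)
    print(f_best)
```

The shrinkage step is what keeps the estimated covariance well conditioned when the number of selected individuals is small relative to the problem dimension; without it, the empirical covariance can become singular and sampling degenerates.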