Introduction to the theory of neural computation
Evolutionary computation: toward a new philosophy of machine intelligence
Statistical machine learning and combinatorial optimization
Theoretical aspects of evolutionary computing
Evolution and Optimum Seeking: The Sixth Generation
Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation
A Survey of Optimization by Building and Using Probabilistic Models
Computational Optimization and Applications
Mean Shift, Mode Seeking, and Clustering
IEEE Transactions on Pattern Analysis and Machine Intelligence
Extending Population-Based Incremental Learning to Continuous Search Spaces
PPSN V Proceedings of the 5th International Conference on Parallel Problem Solving from Nature
Population-Based Incremental Learning: A Method for Integrating Genetic Search Based Function Optimization and Competitive Learning
Probabilistic modeling for continuous EDA with Boltzmann selection and Kullback-Leibler divergence
Proceedings of the 8th annual conference on Genetic and evolutionary computation
Cross entropy and adaptive variance scaling in continuous EDA
Proceedings of the 9th annual conference on Genetic and evolutionary computation
Unified eigen analysis on multivariate Gaussian based estimation of distribution algorithms
Information Sciences: an International Journal
Enhancing the Performance of Maximum-Likelihood Gaussian EDAs Using Anticipated Mean Shift
Proceedings of the 10th international conference on Parallel Problem Solving from Nature: PPSN X
Estimation of distribution algorithm based on copula theory
CEC'09 Proceedings of the Eleventh conference on Congress on Evolutionary Computation
Proceedings of the 12th annual conference on Genetic and evolutionary computation
Computers and Operations Research
A Boltzmann based estimation of distribution algorithm
Information Sciences: an International Journal
Evolutionary algorithms perform optimization using a population of sample solution points. An interesting development has been to view population-based optimization as the process of evolving an explicit, probabilistic model of the search space. This paper investigates a formal basis for continuous, population-based optimization in terms of a stochastic gradient descent on the Kullback-Leibler divergence between the model probability density and the objective function, represented as an unknown density of assumed form. This leads to an update rule that is related to, and compared with, previous theoretical work, a continuous version of the population-based incremental learning algorithm, and the generalized mean shift clustering framework. Experimental results are presented that demonstrate the dynamics of the new algorithm on a set of simple test problems.
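The abstract above can be illustrated with a minimal sketch of a continuous PBIL-style update: a Gaussian model over the search space whose mean and standard deviation are incrementally shifted toward the best samples of each generation. This is a generic sketch under stated assumptions (elite selection, a fixed learning rate `lr`, a simple sphere objective), not the paper's exact KL-divergence gradient rule; all function and parameter names here are illustrative.

```python
import numpy as np

def continuous_pbil(objective, dim, iters=200, pop=50, lr=0.1,
                    elite_frac=0.2, seed=0):
    """Sketch of a continuous PBIL-style search (assumed setup):
    sample from a Gaussian model, select elite points, and nudge the
    model's mean and std toward the elite statistics each generation."""
    rng = np.random.default_rng(seed)
    mean = np.zeros(dim)          # initial model mean
    std = np.full(dim, 2.0)       # initial model spread
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(pop, dim))
        scores = np.array([objective(x) for x in samples])
        elite = samples[np.argsort(scores)[:n_elite]]  # minimization
        # incremental learning: move model parameters toward elite stats
        mean = (1 - lr) * mean + lr * elite.mean(axis=0)
        std = (1 - lr) * std + lr * elite.std(axis=0)
        std = np.maximum(std, 1e-8)  # keep the model non-degenerate
    return mean

# Minimize a sphere function with its optimum at (1, -2).
best = continuous_pbil(lambda x: np.sum((x - np.array([1.0, -2.0]))**2),
                       dim=2)
```

The elite mean pulls the model toward promising regions while the shrinking elite standard deviation contracts the search, which is the same qualitative dynamic the paper analyses through the KL-divergence gradient.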