Global Optimization
Dynamic Programming and Optimal Control
Bayesian Algorithms for One-Dimensional Global Optimization. Journal of Global Optimization.
Efficient Global Optimization of Expensive Black-Box Functions. Journal of Global Optimization.
A Taxonomy of Global Optimization Methods Based on Response Surfaces. Journal of Global Optimization.
Computer Experiments and Global Optimization
Monte Carlo Statistical Methods. Springer Texts in Statistics.
IPSN '08: Proceedings of the 7th International Conference on Information Processing in Sensor Networks
Introduction to Derivative-Free Optimization
Monte Carlo Strategies in Scientific Computing
Continuous Lunches Are Free Plus the Design of Optimal Optimization Algorithms. Algorithmica.
Computational Intelligence in Optimization: Applications and Implementations
Bayesian Optimization Using Sequential Monte Carlo. LION'12: Proceedings of the 6th International Conference on Learning and Intelligent Optimization.
We consider the problem of optimizing a real-valued continuous function f, which is assumed to be expensive to evaluate and, consequently, can only be evaluated a limited number of times. This article focuses on the Bayesian approach to this problem, which consists in combining evaluation results and prior information about f in order to efficiently select new evaluation points, as long as the budget for evaluations is not exhausted. The algorithm called efficient global optimization (EGO), proposed by Jones, Schonlau and Welch (J. Global Optim., 13(4):455–492, 1998), is one of the most popular Bayesian optimization algorithms. It is based on a sampling criterion called the expected improvement (EI), which assumes a Gaussian process prior on f. In the EGO algorithm, the parameters of the covariance of the Gaussian process are estimated from the evaluation results by maximum likelihood, and these parameters are then plugged into the EI sampling criterion. However, it is well known that this plug-in strategy can lead to very disappointing results when the evaluation results do not carry enough information about f to estimate the parameters in a satisfactory manner. We advocate a fully Bayesian approach to this problem, and derive an analytical expression for the EI criterion in the case of Student predictive distributions. Numerical experiments show that the fully Bayesian approach makes EI-based optimization more robust while maintaining an average loss similar to that of the EGO algorithm.
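The abstract does not reproduce the criterion itself, so as an illustration only, here is a minimal sketch of the EI sampling criterion for minimization. The Gaussian case is the standard closed form used by EGO; the Student case uses the closed form commonly given in the literature for a Student predictive distribution with more than one degree of freedom, which the paper's own derivation may state differently. The function names and arguments are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm, t

def ei_gaussian(mu, sigma, f_min):
    """EI for minimization under a Gaussian predictive N(mu, sigma^2).

    mu, sigma: predictive mean and standard deviation at a candidate point.
    f_min: best (smallest) evaluation observed so far.
    """
    z = (f_min - mu) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))

def ei_student(mu, sigma, nu, f_min):
    """EI under a Student predictive with nu > 1 degrees of freedom.

    Standard closed form (an assumption here, not quoted from the paper):
    EI = sigma * [ z * T_nu(z) + (nu + z^2)/(nu - 1) * t_nu(z) ],
    where T_nu and t_nu are the Student cdf and pdf.
    """
    z = (f_min - mu) / sigma
    return sigma * (z * t.cdf(z, df=nu) + (nu + z**2) / (nu - 1) * t.pdf(z, df=nu))
```

As the degrees of freedom grow, the Student predictive tends to the Gaussian one, and the two criteria agree; for small degrees of freedom the Student EI has heavier tails, which reflects the extra parameter uncertainty that the plug-in approach ignores.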