Contemporary global optimization algorithms are based on local measures of utility rather than on a probability measure over the location and value of the optimum. They thus attempt to collect low function values, not to learn about the optimum. The reason for the absence of probabilistic global optimizers is that the corresponding inference problem is intractable in several ways. This paper develops desiderata for probabilistic optimization algorithms, then presents a concrete algorithm that addresses each of the computational intractabilities with a sequence of approximations and explicitly treats the decision problem of maximizing information gain from each evaluation.
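To make the core idea concrete, the sketch below implements one information-gain step in the style the abstract describes: maintain a Gaussian-process belief over the objective, derive from it a probability distribution over the location of the minimum, and pick the next evaluation that most reduces the entropy of that distribution. This is a minimal Monte Carlo illustration under assumed choices (an RBF kernel, a fixed candidate grid, fantasized observations); the paper itself develops a sequence of dedicated approximations rather than brute-force sampling, and all names here (`rbf_kernel`, `p_min`, `expected_entropy_after`) are illustrative, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(a, b, ls=0.3, var=1.0):
    # Squared-exponential kernel on 1-D inputs (assumed hyperparameters).
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Standard GP regression: posterior mean and covariance on test points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = rbf_kernel(Xs, Xs) - v.T @ v
    return mu, cov

def p_min(mu, cov, n_samples=2000, rng=None):
    # Monte Carlo belief over the minimiser's location: draw joint posterior
    # samples on the grid and count where each sample attains its minimum.
    rng = rng or np.random.default_rng(0)
    jitter = 1e-8 * np.eye(len(mu))
    f = rng.multivariate_normal(mu, cov + jitter, size=n_samples)
    idx = f.argmin(axis=1)
    return np.bincount(idx, minlength=len(mu)) / n_samples

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def expected_entropy_after(X, y, x_new, Xs, n_fantasies=10, rng=None):
    # Expected posterior entropy of p_min if we were to evaluate at x_new,
    # averaged over fantasized outcomes drawn from the current GP belief.
    rng = rng or np.random.default_rng(1)
    mu, cov = gp_posterior(X, y, np.array([x_new]))
    sd = np.sqrt(max(cov[0, 0], 1e-12))
    H = 0.0
    for _ in range(n_fantasies):
        y_f = rng.normal(mu[0], sd)
        mu2, cov2 = gp_posterior(np.append(X, x_new), np.append(y, y_f), Xs)
        H += entropy(p_min(mu2, cov2, rng=rng))
    return H / n_fantasies

# Toy usage: one information-gain step on a 1-D test function.
f = lambda x: np.sin(3 * x) + 0.5 * x
X = np.array([-1.0, 0.2, 0.9])          # points evaluated so far
y = f(X)
Xs = np.linspace(-2, 2, 50)             # candidate grid
scores = [expected_entropy_after(X, y, x, Xs) for x in Xs]
x_next = Xs[int(np.argmin(scores))]     # lowest expected entropy = largest expected information gain
```

Note how this differs from utility-based acquisition rules such as expected improvement: the selected point need not have a low predicted function value at all, as long as observing it is expected to sharpen the belief about where the optimum lies.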