Global optimization of non-convex functions over real vector spaces is a problem of widespread theoretical and practical interest. Over the past fifty years, research in global optimization has produced many important approaches, including Lipschitz optimization, simulated annealing, homotopy methods, genetic algorithms, and Bayesian response-surface methods. This work examines the last of these. The Bayesian response-surface approach to global optimization maintains a posterior model of the function being optimized, obtained by combining a prior over functions with accumulating function evaluations. The model is then used to decide which point the method should acquire next in its search for the function's optimum. Bayesian methods can be among the most efficient optimization approaches in terms of the number of function evaluations required, but they have significant drawbacks: current approaches are needlessly data-inefficient, approximations to the Bayes-optimal acquisition criterion are poorly studied, and current approaches do not take advantage of the small-scale properties of differentiable functions near local optima. This work addresses each of these problems to make Bayesian methods more widely applicable.
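The posterior-model-plus-acquisition loop described above can be sketched in a minimal form. This is an illustrative toy, not the method developed in this work: it assumes a zero-mean Gaussian-process prior with a squared-exponential kernel, an expected-improvement acquisition criterion, and a coarse grid search for the acquisition maximizer; the objective function, hyperparameters, and helper names are all hypothetical.

```python
import math
import numpy as np

def rbf_kernel(a, b, length_scale=0.3):
    """Squared-exponential covariance between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-8):
    """Posterior mean and variance of a zero-mean GP at x_query."""
    k = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    ks = rbf_kernel(x_obs, x_query)
    chol = np.linalg.cholesky(k)
    alpha = np.linalg.solve(chol.T, np.linalg.solve(chol, y_obs))
    mean = ks.T @ alpha
    v = np.linalg.solve(chol, ks)
    var = 1.0 - np.sum(v ** 2, axis=0)  # prior variance is 1 for this kernel
    return mean, np.maximum(var, 1e-12)

_norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def expected_improvement(mean, var, best_y):
    """EI for minimization: expected amount a candidate beats the incumbent."""
    sigma = np.sqrt(var)
    z = (best_y - mean) / sigma
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return sigma * (z * _norm_cdf(z) + pdf)

def bayes_opt(f, lo, hi, n_init=3, n_iter=10, seed=0):
    """Toy Bayesian optimization loop: fit posterior, maximize EI, evaluate."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(lo, hi, 200)          # candidate acquisition points
    x_obs = rng.uniform(lo, hi, n_init)      # initial random design
    y_obs = f(x_obs)
    for _ in range(n_iter):
        mean, var = gp_posterior(x_obs, y_obs, grid)
        x_next = grid[np.argmax(expected_improvement(mean, var, y_obs.min()))]
        x_obs = np.append(x_obs, x_next)
        y_obs = np.append(y_obs, f(x_next))
    i = np.argmin(y_obs)
    return x_obs[i], y_obs[i]

if __name__ == "__main__":
    objective = lambda x: np.sin(3.0 * x) + 0.5 * x ** 2  # toy non-convex f
    x_best, y_best = bayes_opt(objective, -2.0, 2.0)
    print(x_best, y_best)
```

The data-efficiency the abstract refers to comes from the acquisition step: each new evaluation is chosen where the posterior predicts the largest expected gain, rather than on a fixed grid or at random. A practical implementation would also fit the kernel hyperparameters and use a proper inner optimizer for the acquisition function instead of a grid.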