In recent decades, enormous advances have been made in modelling complex (physical) systems with mathematical equations and computer algorithms. To cope with the very long running times of such models, a promising approach is to replace them with stochastic approximations built from only a few model evaluations. In this paper we focus on the common case in which the modelled system has two types of inputs x = (xc, xe), where xc are control variables and xe are environmental variables. Typically, xc must be optimised, whereas xe are uncontrollable but are assumed to follow some distribution. We take a Bayesian approach to this problem: we place a Gaussian process prior on the underlying function and use Bayesian Monte Carlo to obtain the objective function by integrating out the environmental variables. Furthermore, we empirically evaluate several active learning criteria that were developed for the deterministic case (i.e., no environmental variables) and show that the ALC criterion performs significantly better than expected improvement and random selection.
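The setup can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the paper's method: the toy system `f`, the uniform distribution on the environmental variable, and the kernel hyperparameters are all invented for illustration, and the integral over xe is approximated by plain Monte Carlo averaging of the GP posterior mean rather than by the analytic Bayesian Monte Carlo integration used in the paper.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.3, variance=1.0):
    """Squared-exponential covariance between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior_mean(X_train, y_train, X_test, noise=1e-6):
    """Posterior mean of a zero-mean GP with an RBF kernel."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    return K_s @ np.linalg.solve(K, y_train)

# Hypothetical system with a control input xc and an environmental input xe.
def f(xc, xe):
    return np.sin(3 * xc) + 0.5 * xe * np.cos(2 * xc)

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(30, 2))   # columns: (xc, xe)
y_train = f(X_train[:, 0], X_train[:, 1])

def objective(xc, n_mc=200):
    """Objective over xc alone: average the GP posterior mean over
    samples of xe drawn from its assumed distribution, xe ~ U(0, 1)."""
    xe = rng.uniform(0.0, 1.0, size=n_mc)
    X = np.column_stack([np.full(n_mc, xc), xe])
    return gp_posterior_mean(X_train, y_train, X).mean()

# Optimise the integrated objective over the control variable on a grid.
xc_grid = np.linspace(0.0, 1.0, 50)
g = np.array([objective(xc) for xc in xc_grid])
best_xc = xc_grid[np.argmax(g)]
```

In this sketch the surrogate is fit once to a fixed design; the paper's contribution concerns how to select those training points actively (e.g., via the ALC criterion) rather than uniformly at random.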