Designs of micro-electro-mechanical devices need to be robust against fluctuations in mass production. Computer experiments with tens of parameters are used to explore the behavior of the system and to compute sensitivity measures as expectations over the input distribution. Monte Carlo methods are a simple approach to estimating these integrals, but they are infeasible when the models are computationally expensive. By placing a Gaussian process prior on the simulator, expensive simulation runs can be saved. This Bayesian quadrature allows for active selection of the inputs where a simulation promises to be most valuable, so the number of simulation runs can be reduced further. We present an active learning scheme for sensitivity analysis which is rigorously derived from the corresponding Bayesian expected loss. On three fully featured, high-dimensional physical models of electro-mechanical sensors, we show that the learning rate of the active scheme is significantly better than that of passive learning.
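The core idea of the abstract can be illustrated in one dimension. The following is a minimal, hypothetical sketch (not the authors' implementation): a Gaussian process with a squared-exponential kernel is placed on the simulator, the integral of its output under a standard normal input density has a closed-form posterior mean and variance (Bayesian quadrature), and the next simulation input is chosen greedily as the candidate that most reduces the posterior variance of the integral. The kernel, lengthscale, and input density here are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel with unit signal variance (assumed form).
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def kernel_mean(x, ell=1.0):
    # z(x) = E_{x'~N(0,1)}[k(x, x')], closed form for the RBF kernel.
    s2 = ell**2
    return np.sqrt(s2 / (s2 + 1.0)) * np.exp(-0.5 * x**2 / (s2 + 1.0))

def bq_posterior(X, y, ell=1.0, jitter=1e-8):
    # Posterior mean and variance of the integral  I = ∫ f(x) N(x; 0, 1) dx
    # under a zero-mean GP prior on f, conditioned on observations (X, y).
    K = rbf(X, X, ell) + jitter * np.eye(len(X))
    z = kernel_mean(X, ell)
    Kinv_z = np.linalg.solve(K, z)
    mean = Kinv_z @ y
    # Prior variance of I: double integral of the kernel under N(0,1) twice.
    zz = np.sqrt(ell**2 / (ell**2 + 2.0))
    var = zz - z @ Kinv_z
    return mean, var

def select_next(X, candidates, ell=1.0):
    # Active selection: the posterior variance of the integral does not
    # depend on y, so we can score each candidate by the variance that
    # would remain after running the simulation there, and pick the best.
    best, best_var = None, np.inf
    for c in candidates:
        Xc = np.append(X, c)
        _, v = bq_posterior(Xc, np.zeros(len(Xc)), ell)
        if v < best_var:
            best, best_var = c, v
    return best
```

As a usage example, estimating E[f(X)] for f(x) = x² under X ~ N(0, 1) (true value 1): start from a single design point, greedily add inputs from a candidate grid, then evaluate the simulator only at the selected points and read off the quadrature estimate. The same y-independence of the variance is what makes the design phase cheap relative to the simulation runs.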