Physics-based simulation codes are widely used in science and engineering to model complex systems that would be infeasible to study otherwise. Such codes provide the highest-fidelity representation of system behavior, but are often so slow to run that insight into the system is limited. For example, an exhaustive sweep over a d-dimensional input parameter space with k steps along each dimension requires k^d simulation trials (translating into k^d CPU-days for one of our current simulations). An alternative is directed exploration, in which the next simulation trials are chosen cleverly at each step. Given the results of previous trials, supervised learning techniques (SVM, KDE, GP) are applied to build simplified predictive models of system behavior. These models are then used within an active learning framework to identify the most valuable trials to run next. Several active learning strategies are examined, including a recently proposed information-theoretic approach. Performance is evaluated on a set of thirteen synthetic oracles, which serve as surrogates for the more expensive simulations and enable the experiments to be replicated by other researchers.
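The loop described above can be sketched in miniature: fit a surrogate to the trials run so far, then query the point in the candidate pool the surrogate is least certain about. The sketch below uses a Gaussian-process surrogate with uncertainty sampling as the selection strategy; the 1-D oracle, the fixed RBF kernel and its length-scale, and the pool size are illustrative assumptions, not details taken from the paper (which uses richer oracles and compares several selection strategies).

```python
import numpy as np

# Hypothetical cheap "oracle" standing in for an expensive simulation code.
def oracle(x):
    return np.sin(3.0 * x) + 0.5 * x

def rbf(a, b, length_scale=0.3):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior_std(x_train, x_query, noise=1e-6):
    """Predictive std. dev. of a zero-mean GP at x_query.

    Note: with a fixed kernel, the GP predictive variance depends only on
    the *locations* of the training inputs, not on the observed outputs.
    """
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_query)
    v = np.linalg.solve(K, Ks)
    var = rbf(x_query, x_query).diagonal() - np.sum(Ks * v, axis=0)
    return np.sqrt(np.maximum(var, 0.0))

def run_active_learning(pool, n_init=3, n_queries=5, seed=0):
    """Pool-based active learning via uncertainty sampling."""
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(pool), size=n_init, replace=False))
    for _ in range(n_queries):
        std = gp_posterior_std(pool[labeled], pool)
        std[labeled] = -np.inf            # never re-run a labeled trial
        labeled.append(int(np.argmax(std)))  # most uncertain point next
    return labeled

pool = np.linspace(0.0, 2.0, 200)  # discretized 1-D parameter space
selected = run_active_learning(pool)
```

Each iteration refits the surrogate and spends the next "simulation" where the model's predictive variance is largest, which is the directed-exploration alternative to the k^d exhaustive sweep.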