The goal of active learning is to determine the locations of training input points so that the generalization error is minimized. We discuss the problem of active learning in linear regression scenarios. Traditional active learning methods using least-squares learning often assume that the model used for learning is correctly specified. In many practical situations, however, this assumption may not hold. Recently, active learning methods using importance-weighted least-squares learning have been proposed, which are shown to be robust against model misspecification. In this paper, we propose a new active learning method also based on weighted least-squares learning, which we call ALICE (Active Learning using the Importance-weighted least-squares learning based on Conditional Expectation of the generalization error). An important difference from existing methods is that we predict the conditional expectation of the generalization error given the training input points, while existing methods predict the full expectation of the generalization error. Due to this difference, the training input design can be fine-tuned depending on the realization of the training input points. Theoretically, we prove that the proposed active learning criterion is a more accurate predictor of the single-trial generalization error than the existing criterion. Numerical studies with toy and benchmark data sets show that the proposed method compares favorably to existing methods.
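To make the importance-weighted least-squares estimator mentioned above concrete, the following is a minimal NumPy sketch. It is not the ALICE criterion itself; it only illustrates the weighted estimator that ALICE and the existing methods build on, on a toy covariate-shift setup where the training and test input densities are both assumed Gaussian (so the true density ratio is available in closed form; the function names and distribution parameters are illustrative choices, not from the paper).

```python
import numpy as np

def iwls(X, y, weights):
    """Importance-weighted least squares.

    Solves min_beta sum_i w_i * (y_i - x_i . beta)^2 in closed form:
    beta = (X^T W X)^{-1} X^T W y, with W = diag(weights).
    """
    W = np.diag(weights)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

def gauss_pdf(x, mu, s):
    # Univariate Gaussian density, used here only to form the ratio.
    return np.exp(-(x - mu) ** 2 / (2 * s * s)) / (s * np.sqrt(2 * np.pi))

# Toy covariate shift: training inputs drawn from p_tr = N(1, 1),
# test inputs (hypothetically) from p_te = N(2, 1).
rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=200)
y = 0.5 * x + rng.normal(0.0, 0.1, size=200)   # noisy linear targets

# Importance weights = density ratio p_te(x) / p_tr(x).
w = gauss_pdf(x, 2.0, 1.0) / gauss_pdf(x, 1.0, 1.0)

# Linear model with an intercept term.
X = np.column_stack([x, np.ones_like(x)])
beta = iwls(X, y, w)
```

Here the model is correctly specified, so ordinary and weighted least squares agree in expectation; the weighting matters precisely when the model is misspecified, which is the setting the abstract targets.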