Supervised machine learning techniques developed in the Probably Approximately Correct, Maximum A Posteriori, and Structural Risk Minimization frameworks typically assume that the test data to which a learner is applied are drawn from the same distribution as the training data. In many prominent applications of learning techniques, from robotics to medical diagnosis to process control, this assumption is violated. We consider a novel framework in which a learner may influence its test distribution in a bounded way. Within this framework, we derive an efficient algorithm that acts as a wrapper around a broad class of existing supervised learning algorithms while guaranteeing more robust behavior under changes in the input distribution.
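The abstract does not spell out the wrapper algorithm itself. As a hedged illustration only, the toy sketch below shows one generic way such a wrapper could work: the learner is repeatedly retrained on data aggregated from the input distributions its own predictions induce, rather than from a fixed training distribution. All names here (`expert_label`, `induced_inputs`, the nearest-centroid base learner, the particular bounded shift) are hypothetical choices for this sketch, not the paper's method.

```python
import random

random.seed(0)

def expert_label(x):
    # hypothetical oracle supplying ground-truth labels: sign of first coordinate
    return 1 if x[0] > 0.0 else 0

def train(data):
    # toy base learner: nearest-centroid classifier (mean feature vector per class)
    sums = {0: [0.0, 0.0], 1: [0.0, 0.0]}
    counts = {0: 0, 1: 0}
    for x, y in data:
        sums[y][0] += x[0]
        sums[y][1] += x[1]
        counts[y] += 1
    return {y: [s / max(counts[y], 1) for s in sums[y]] for y in sums}

def predict(model, x):
    # assign the class whose centroid is nearest in squared Euclidean distance
    def dist(c):
        return (x[0] - c[0]) ** 2 + (x[1] - c[1]) ** 2
    return min(model, key=lambda y: dist(model[y]))

def induced_inputs(model, n):
    # the learner's own predictions shift where future inputs come from,
    # mimicking a test distribution the learner influences in a bounded way
    xs = []
    for _ in range(n):
        x = [random.uniform(-1, 1), random.uniform(-1, 1)]
        if model is not None and predict(model, x) == 1:
            x[1] += 0.5  # bounded distribution shift caused by the learner
        xs.append(x)
    return xs

def aggregate_and_train(rounds=5, n=200):
    # wrapper loop: collect inputs from the currently induced distribution,
    # label them with the oracle, aggregate with all earlier data, retrain
    data, model = [], None
    for _ in range(rounds):
        for x in induced_inputs(model, n):
            data.append((x, expert_label(x)))
        model = train(data)
    return model
```

Because the base learner is only ever called through `train` and `predict`, the same loop wraps any supervised learner exposing that interface, which is the spirit of the "wrapper around a broad class of existing supervised learning algorithms" claim.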