Communications of the ACM
Efficient distribution-free learning of probabilistic concepts. Journal of Computer and System Sciences, special issue: 31st IEEE Conference on Foundations of Computer Science, Oct. 22–24, 1990.
Associative Reinforcement Learning: Functions in k-DNF. Machine Learning.
Machine Learning.
Expected Mistake Bound Model for On-Line Reinforcement Learning. ICML '97: Proceedings of the Fourteenth International Conference on Machine Learning.
An Improved On-line Algorithm for Learning Linear Evaluation Functions. COLT '00: Proceedings of the Thirteenth Annual Conference on Computational Learning Theory.
Cost-Sensitive Learning by Cost-Proportionate Example Weighting. ICDM '03: Proceedings of the Third IEEE International Conference on Data Mining.
The foundations of cost-sensitive learning. IJCAI '01: Proceedings of the 17th International Joint Conference on Artificial Intelligence, Volume 2.
Artificial Intelligence.
Knows what it knows: a framework for self-aware learning. Proceedings of the 25th International Conference on Machine Learning.
The offset tree for learning with partial labels. Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Unbiased offline evaluation of contextual-bandit-based news article recommendation algorithms. Proceedings of the Fourth ACM International Conference on Web Search and Data Mining.
Multi-armed bandits with episode context. Annals of Mathematics and Artificial Intelligence.
Reusing historical interaction data for faster online learning to rank for IR. Proceedings of the Sixth ACM International Conference on Web Search and Data Mining.
We formalize the associative bandit problem framework introduced by Kaelbling as a learning-theory problem. The learning environment is modeled as a k-armed bandit where arm payoffs are conditioned on an observable input selected on each trial. We show that, if the payoff functions are constrained to a known hypothesis class, learning can be performed efficiently with respect to the VC dimension of this class. We formally reduce the problem of PAC classification to the associative bandit problem, producing an efficient algorithm for any hypothesis class for which efficient classification algorithms are known. We demonstrate the approach empirically on a scalable concept class.
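The interaction protocol described above can be illustrated with a minimal sketch. This is not the authors' algorithm; it is a hypothetical epsilon-greedy learner in an assumed two-armed environment where each arm's Bernoulli payoff mean is conditioned on an observable boolean input, showing the trial loop of the associative bandit setting: observe an input, choose an arm, receive a payoff.

```python
import random

K = 2  # number of arms (k = 2 in this toy environment)

def payoff(arm, x):
    """Hypothetical environment: the arm's payoff is Bernoulli with a
    mean conditioned on the observable boolean input x."""
    p = 0.9 if arm == int(x) else 0.1
    return 1 if random.random() < p else 0

def run(trials=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy sketch of the associative bandit trial loop."""
    random.seed(seed)
    counts, sums = {}, {}  # per-(input, arm) empirical payoff statistics
    total = 0
    for _ in range(trials):
        x = random.random() < 0.5  # observable input for this trial
        if random.random() < epsilon:
            arm = random.randrange(K)  # explore uniformly
        else:
            # exploit: arm with the best empirical mean payoff given x
            arm = max(range(K),
                      key=lambda a: sums.get((x, a), 0.0)
                                    / max(counts.get((x, a), 1), 1))
        r = payoff(arm, x)
        counts[(x, arm)] = counts.get((x, arm), 0) + 1
        sums[(x, arm)] = sums.get((x, arm), 0.0) + r
        total += r
    return total / trials

avg = run()
```

Because the best arm depends on the input, a learner that ignores `x` can earn at most an average payoff of 0.5 here, while one that conditions on it approaches 0.9 less the exploration cost; the paper's contribution is doing this sample-efficiently for payoff functions from a known hypothesis class rather than by per-input tabulation.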