In the standard on-line model the learning algorithm tries to minimize the total number of mistakes made in a series of trials. On each trial the learner sees an instance, makes a prediction of its classification, and then finds out the correct classification. We define a natural variant of this model ("apple tasting") where

* the classes are interpreted as the good and bad instances,
* the prediction is interpreted as accepting or rejecting the instance, and
* the learner gets feedback only when the instance is accepted.

We use two transformations to relate the apple tasting model to an enhanced standard model in which false acceptances are counted separately from false rejections. We apply our results to obtain a good general-purpose apple tasting algorithm as well as nearly optimal apple tasting algorithms for a variety of standard classes, such as conjunctions and disjunctions of n Boolean variables. We also present and analyze a simpler transformation useful when the instances are drawn at random rather than selected by an adversary.
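The apple-tasting protocol above can be sketched in a few lines of code. This is a minimal illustration, not the paper's transformations or mistake-bound algorithms: the learner, class names, and the exploration rule are all hypothetical, using a toy eliminate-variables learner for conjunctions over n Boolean variables. The essential point is in `run_trials`: the true label is revealed only when the learner accepts.

```python
import random

class AppleTastingLearner:
    """Toy apple-tasting learner for conjunctions of n Boolean variables
    (illustrative only; not the paper's transformation).

    It starts by assuming every variable is in the target conjunction and,
    on each accepted positive instance, eliminates variables that are 0 --
    but it only sees a label when it accepts."""

    def __init__(self, n, explore_prob=0.2, rng=None):
        self.relevant = set(range(n))      # candidate variables of the conjunction
        self.explore_prob = explore_prob   # chance of accepting just to get feedback
        self.rng = rng or random.Random(0)

    def predict(self, x):
        # Accept if the current hypothesis conjunction is satisfied;
        # otherwise occasionally accept anyway to obtain feedback.
        if all(x[i] for i in self.relevant):
            return True
        return self.rng.random() < self.explore_prob

    def feedback(self, x, label):
        # Called only when the instance was accepted (the apple was tasted).
        if label:  # positive example: drop variables that are 0 in x
            self.relevant = {i for i in self.relevant if x[i]}

def run_trials(learner, instances, target):
    """Apple-tasting protocol: the label is revealed only on acceptance."""
    mistakes = 0
    for x in instances:
        label = target(x)
        accepted = learner.predict(x)
        if accepted != label:
            mistakes += 1                  # false accept or false reject
        if accepted:
            learner.feedback(x, label)     # no feedback on rejected instances
    return mistakes
```

With `explore_prob=0` the learner never pays for feedback on instances its hypothesis rejects, so false rejections can go unnoticed indefinitely; a positive `explore_prob` trades extra false accepts for the information needed to correct the hypothesis, which is exactly the tension the apple tasting model captures.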