On the learnability of Boolean formulae
STOC '87 Proceedings of the nineteenth annual ACM symposium on Theory of computing
Learnability and the Vapnik-Chervonenkis dimension
Journal of the ACM (JACM)
Mistake bounds and logarithmic linear-threshold learning algorithms
COLT '90 Proceedings of the third annual workshop on Computational learning theory
The Convergence of TD(λ) for General λ
Machine Learning
Lower Bound Methods and Separation Results for On-Line Learning Models
Machine Learning - Computational learning theory
Universal forecasting algorithms
Information and Computation
Active Learning Using Arbitrary Binary Valued Queries
Machine Learning
The weighted majority algorithm
Information and Computation
Simulating access to hidden information while learning
STOC '94 Proceedings of the twenty-sixth annual ACM symposium on Theory of computing
Toward Efficient Agnostic Learning
Machine Learning - Special issue on computational learning theory, COLT'92
On the Complexity of Function Learning
Machine Learning - Special issue on COLT '93
On the worst-case analysis of temporal-difference learning algorithms
Machine Learning - Special issue on reinforcement learning
Asking questions to minimize errors
Journal of Computer and System Sciences
On-line prediction and conversion strategies
Machine Learning
Learning to Predict by the Methods of Temporal Differences
Machine Learning
Temporal credit assignment in reinforcement learning
Theory revision with queries: horn, read-once, and parity formulas
Artificial Intelligence
The Journal of Machine Learning Research
Learning conditional preference networks
Artificial Intelligence
Semantic communication for simple goals is equivalent to on-line learning
ALT'11 Proceedings of the 22nd international conference on Algorithmic learning theory
We solve an open problem of Maass and Turán, showing that the optimal mistake bound when learning a given concept class without membership queries is within a constant factor of the optimal number of mistakes plus membership queries required by an algorithm that can ask membership queries. Previously known results imply that the constant factor in our bound is best possible. We then show that, in a natural generalization of the mistake-bound model, the usefulness to the learner of arbitrary "yes-no" questions between trials is very limited. We show that several natural structural questions about relatives of the mistake-bound model can be answered through the application of this general result. Most of these results can be interpreted as saying that learning in apparently less powerful (and more realistic) models is not much more difficult than learning in more powerful models.