Shifting: One-inclusion mistake bounds and sample compression
Journal of Computer and System Sciences
Unlabeled compression schemes for maximum classes
COLT'05 Proceedings of the 18th annual conference on Learning Theory
Haussler, Littlestone, and Warmuth (1994) described a general-purpose algorithm for learning in the prediction model and proved an upper bound on the probability that their algorithm makes a mistake, in terms of the number of examples seen and the Vapnik-Chervonenkis (VC) dimension of the concept class being learned. We show that their bound is within a factor of 1 + o(1) of the best possible such bound for any algorithm.
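The VC dimension that governs the bound above is the size of the largest set of points on which the concept class realizes every possible labeling. As a minimal illustration (not part of the paper; the toy finite classes and all function names here are hypothetical, whereas real concept classes are typically infinite), a brute-force shattering check recovers the textbook values for thresholds and intervals on a small domain:

```python
from itertools import combinations, product

def shatters(hypotheses, points):
    """True if the hypotheses realize all 2^|points| labelings of `points`."""
    labelings = {tuple(h(x) for x in points) for h in hypotheses}
    return len(labelings) == 2 ** len(points)

def vc_dimension(hypotheses, domain):
    """Largest d such that some d-subset of `domain` is shattered (brute force)."""
    d = 0
    for k in range(1, len(domain) + 1):
        if any(shatters(hypotheses, pts) for pts in combinations(domain, k)):
            d = k
    return d

domain = list(range(5))

# Threshold functions h_t(x) = 1 iff x >= t: VC dimension 1
# (one point is shattered, but the labeling (1, 0) of an ordered pair is not realizable).
thresholds = [(lambda x, t=t: int(x >= t)) for t in range(6)]
print(vc_dimension(thresholds, domain))  # 1

# Interval indicators x -> 1 iff a <= x <= b: VC dimension 2
# (any two points are shattered; the labeling (1, 0, 1) of an ordered triple is not).
intervals = [(lambda x, a=a, b=b: int(a <= x <= b))
             for a, b in product(range(5), repeat=2) if a <= b]
print(vc_dimension(intervals, domain))  # 2
```

The check is exponential in the domain size, so it is only a didactic device for verifying small examples, not a practical tool.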