Predicting {0, 1}-functions on randomly drawn points. Information and Computation.
Linear Algebraic Proofs of VC-Dimension Based Inequalities. EuroCOLT '97: Proceedings of the Third European Conference on Computational Learning Theory.
On space-bounded learning and the Vapnik-Chervonenkis dimension. The Journal of Machine Learning Research.
A Compression Approach to Support Vector Model Selection. The Journal of Machine Learning Research.
Tutorial on Practical Prediction Theory for Classification. The Journal of Machine Learning Research.
Estimation of Dependences Based on Empirical Data (Springer Series in Statistics).
The one-inclusion graph algorithm is near-optimal for the prediction model of learning. IEEE Transactions on Information Theory.
Shifting: One-inclusion mistake bounds and sample compression. Journal of Computer and System Sciences.
We give a compression scheme for any maximum class of VC dimension d that compresses any sample consistent with a concept in the class to at most d unlabeled points from the domain of the sample.
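To illustrate the compress/reconstruct contract that such a scheme satisfies, here is a minimal sketch (not the paper's construction) for the class of threshold functions on {1, ..., n}, a maximum class of VC dimension d = 1: any consistent sample is compressed to at most one unlabeled point, from which a hypothesis consistent with the whole sample can be rebuilt. The function names `compress` and `reconstruct` are illustrative, not taken from the paper.

```python
# Toy unlabeled compression scheme for thresholds h_t(x) = 1 iff x >= t,
# a maximum class of VC dimension d = 1. The compression set holds at
# most d = 1 unlabeled domain point.

def compress(sample):
    """sample: list of (x, label) pairs consistent with some threshold t.
    Returns the compression set: at most one unlabeled point."""
    positives = [x for x, y in sample if y == 1]
    # The smallest positively labeled point determines a consistent threshold.
    return [min(positives)] if positives else []

def reconstruct(points):
    """Map a compression set back to a hypothesis on the domain."""
    if not points:
        return lambda x: 0              # no positives seen: all-zero hypothesis
    t = points[0]
    return lambda x: 1 if x >= t else 0

# A sample consistent with threshold t = 4 (label 1 iff x >= 4).
sample = [(1, 0), (3, 0), (4, 1), (7, 1)]
rep = compress(sample)                  # at most d = 1 unlabeled point
h = reconstruct(rep)
assert all(h(x) == y for x, y in sample)
print(rep)  # [4]
```

The key point the abstract makes is that the compression set carries no labels: `rep` stores only the point `4`, and `reconstruct` recovers the labeling from the point's position alone.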