Partitioning and geometric embedding of range spaces of finite Vapnik-Chervonenkis dimension
SCG '87 Proceedings of the third annual symposium on Computational geometry
A general lower bound on the number of examples needed for learning
Information and Computation
Learnability and the Vapnik-Chervonenkis dimension
Journal of the ACM (JACM)
Predicting {0, 1}-functions on randomly drawn points
Information and Computation
Sphere packing numbers for subsets of the Boolean n-cube with bounded Vapnik-Chervonenkis dimension
Journal of Combinatorial Theory Series A
Characterizations of learnability for classes of {0, …, n}-valued functions
Journal of Computer and System Sciences
Discrete Applied Mathematics - Special issue: Vapnik-Chervonenkis dimension
Sample Compression, Learnability, and the Vapnik-Chervonenkis Dimension
EuroCOLT '97 Proceedings of the Third European Conference on Computational Learning Theory
On space-bounded learning and the Vapnik-Chervonenkis dimension
Unlabeled Compression Schemes for Maximum Classes
The Journal of Machine Learning Research
Unlabeled compression schemes for maximum classes
COLT'05 Proceedings of the 18th annual conference on Learning Theory
Mathematical and Computer Modelling: An International Journal
The one-inclusion graph algorithm is near-optimal for the prediction model of learning
IEEE Transactions on Information Theory
One-inclusion hypergraph density revisited
Information Processing Letters
Journal of Computer and System Sciences
A geometric approach to sample compression
The Journal of Machine Learning Research
Sauer's bound for a notion of teaching complexity
ALT'12 Proceedings of the 23rd international conference on Algorithmic Learning Theory
We present new expected risk bounds for binary and multiclass prediction, and resolve several recent conjectures on sample compressibility due to Kuzmin and Warmuth. By exploiting the combinatorial structure of concept class F, Haussler et al. achieved a VC(F)/n bound for the natural one-inclusion prediction strategy. The key step in their proof is a d = VC(F) bound on the graph density of a subgraph of the hypercube, the one-inclusion graph. The first main result of this paper is a density bound of $n\binom{n-1}{\le d-1} / \binom{n}{\le d} < d$.