Within the framework of PAC-learning, we explore the learnability of concepts from samples using the paradigm of sample compression schemes. A sample compression scheme of size k for a concept class C ⊆ 2^X consists of a compression function and a reconstruction function. The compression function receives a finite sample set consistent with some concept in C and chooses a subset of k examples as the compression set. The reconstruction function forms a hypothesis on X from a compression set of k examples. For any sample set of a concept in C, the compression set produced by the compression function must lead to a hypothesis consistent with the whole original sample set when it is fed to the reconstruction function. We demonstrate that the existence of a sample compression scheme of fixed size for a class C is sufficient to ensure that the class C is PAC-learnable.

Previous work has shown that a class is PAC-learnable if and only if the Vapnik-Chervonenkis (VC) dimension of the class is finite. In the second half of this paper we explore the relationship between sample compression schemes and the VC dimension. We define maximum and maximal classes of VC dimension d. For every maximum class of VC dimension d there is a sample compression scheme of size d, and for sufficiently large maximum classes there is no sample compression scheme of size less than d. We briefly discuss classes of VC dimension d that are maximal but not maximum. It remains an open question whether every class of VC dimension d has a sample compression scheme of size O(d).
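To make the compression/reconstruction paradigm concrete, the following is a minimal Python sketch of a size-4 compression scheme for the standard example of axis-aligned rectangles in the plane, a class of VC dimension 4. The function names compress and reconstruct, and the representation of examples as (point, label) pairs, are illustrative choices for this sketch, not notation from the paper.

```python
# Sketch of a sample compression scheme of size 4 for axis-aligned
# rectangles in the plane. Names are illustrative, not from the paper.

def compress(sample):
    """Choose at most 4 positive examples whose bounding box
    already covers every positive example in the sample."""
    positives = [x for x, label in sample if label]
    if not positives:
        return []  # empty compression set -> empty concept
    xmin = min(positives, key=lambda p: p[0])
    xmax = max(positives, key=lambda p: p[0])
    ymin = min(positives, key=lambda p: p[1])
    ymax = max(positives, key=lambda p: p[1])
    # Duplicates may collapse the set below size 4; that is allowed,
    # since the scheme only needs *at most* k = 4 examples.
    return list({xmin, xmax, ymin, ymax})

def reconstruct(compression_set):
    """Form a hypothesis on the plane from a compression set: the
    smallest axis-aligned rectangle containing the set."""
    if not compression_set:
        return lambda p: False  # empty concept
    lo_x = min(p[0] for p in compression_set)
    hi_x = max(p[0] for p in compression_set)
    lo_y = min(p[1] for p in compression_set)
    hi_y = max(p[1] for p in compression_set)
    return lambda p: lo_x <= p[0] <= hi_x and lo_y <= p[1] <= hi_y

# If the sample is consistent with some rectangle, the reconstructed
# hypothesis is consistent with the whole original sample: it covers
# all positives by construction, and no negative can fall inside it,
# because the true rectangle already contains this bounding box.
sample = [((1, 1), True), ((3, 2), True), ((2, 4), True), ((5, 5), False)]
h = reconstruct(compress(sample))
assert all(h(x) == label for x, label in sample)
```

Note that compress returns at most four examples no matter how large the sample is, and feeding its output through reconstruct yields a hypothesis consistent with the entire original sample, which is exactly the consistency requirement stated above; the PAC guarantee then follows from the fixed compression size k = 4.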