We extend the VC theory of statistical learning to data-dependent spaces of classifiers. This theory can be viewed as a decomposition of classifier design into two components: the first is a restriction to a data-dependent hypothesis class, and the second is empirical risk minimization within that class. We define a measure of complexity for data-dependent hypothesis classes and provide data-dependent versions of bounds on error deviance and estimation error. We also provide a structural risk minimization procedure over data-dependent hierarchies and prove its consistency. We use this theory to provide a framework for studying the trade-offs between performance and computational complexity in classifier design. As a consequence, we obtain a new family of classifiers with dimension-independent performance bounds and efficient learning procedures.
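To make the two-component decomposition concrete, the following is a minimal sketch (an illustration under assumed choices, not the paper's construction): the hypothesis class consists of ball classifiers centered at positive training points, so the class itself is determined by the sample, and learning is empirical risk minimization within that class. The function names, the ball-classifier form, and the candidate radii are all hypothetical.

import numpy as np

def build_hypothesis_class(X, y, radii):
    # Stage 1: restrict to a data-dependent class. Each hypothesis is a
    # ball classifier h(x) = +1 iff ||x - c|| <= r, with centers c taken
    # from the positive training points, so the class depends on the sample.
    centers = X[y == 1]
    return [(c, r) for c in centers for r in radii]

def empirical_risk(h, X, y):
    # 0-1 empirical risk of one ball classifier on the sample.
    c, r = h
    preds = np.where(np.linalg.norm(X - c, axis=1) <= r, 1, -1)
    return float(np.mean(preds != y))

def erm_over_data_dependent_class(X, y, radii=(0.5, 1.0, 2.0)):
    # Stage 2: empirical risk minimization within the restricted class.
    H = build_hypothesis_class(X, y, radii)
    return min(H, key=lambda h: empirical_risk(h, X, y))

# Toy usage: two Gaussian blobs labeled +1 / -1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
y = np.array([1] * 20 + [-1] * 20)
c, r = erm_over_data_dependent_class(X, y)
print("chosen center:", c, "radius:", r)

Once the sample and the candidate radii are fixed, the class above is finite, so in the simplest version of such bounds its cardinality can stand in for the data-dependent complexity measure the abstract refers to.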