Maximal Discrepancy (MD) is a powerful statistical technique that has been proposed for model selection and error estimation in classification problems. The approach is particularly attractive in small-sample settings, since it avoids the use of a separate validation set. Unfortunately, the MD method requires a bounded loss function, a requirement that most learning algorithms, including the Support Vector Machine (SVM), avoid because it gives rise to a non-convex optimization problem. In this work we derive a new approach that rigorously applies the MD technique to the error estimation of the SVM while, at the same time, preserving the original SVM framework.
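
To illustrate the tension the abstract describes, the sketch below computes an MD-style discrepancy term for a linear SVM. This is a minimal sketch, not the rigorous procedure derived in the paper: it heuristically approximates the discrepancy maximizer with a standard convex SVM trained on a half-sample whose labels have been flipped, and it enforces boundedness only by clipping the hinge loss at evaluation time. The function names, the clipping choice, and the use of scikit-learn's SVC are all illustrative assumptions.

```python
# Sketch of Maximal Discrepancy (MD) estimation for an SVM-style classifier.
# NOT the rigorous procedure of the paper: the discrepancy maximizer is
# approximated by an ordinary convex SVM trained on a label-flipped half,
# and the loss is bounded only by clipping at evaluation time (assumption).
import numpy as np
from sklearn.svm import SVC

def clipped_hinge(margins):
    """Hinge loss clipped to [0, 1], so the loss is bounded as MD requires."""
    return np.clip(1.0 - margins, 0.0, 1.0)

def md_estimate(X, y, C=1.0):
    """Approximate the maximal discrepancy of a linear SVM class on (X, y).

    Splits the sample into halves S1, S2 and heuristically maximizes
    (avg loss on S1) - (avg loss on S2) by training on the dataset where
    the labels of S1 are flipped (exact for the 0-1 loss on binary labels,
    a heuristic for the clipped hinge used here).
    """
    n = len(y) // 2
    X1, y1, X2, y2 = X[:n], y[:n], X[n:], y[n:]

    # Minimizing error on (flipped S1) + S2 approximately maximizes the
    # discrepancy between the two halves.
    clf = SVC(kernel="linear", C=C).fit(
        np.vstack([X1, X2]), np.concatenate([-y1, y2])
    )

    # Evaluate the discrepancy with the bounded (clipped) loss.
    loss1 = clipped_hinge(y1 * clf.decision_function(X1)).mean()
    loss2 = clipped_hinge(y2 * clf.decision_function(X2)).mean()
    return loss1 - loss2

# Toy usage on synthetic data with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=200))
emp = clipped_hinge(
    y * SVC(kernel="linear").fit(X, y).decision_function(X)
).mean()
print(f"empirical error ~ {emp:.3f}, MD term ~ {md_estimate(X, y):.3f}")
```

The MD term is then added to the empirical error to obtain a generalization estimate. Note that the convex SVM solver handles the unbounded hinge loss, while the bound evaluation clips it: reconciling these two steps rigorously, rather than by the post-hoc clipping assumed above, is precisely what the paper addresses.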