The purpose of this paper is to provide a PAC error analysis for the q-norm soft margin classifier, a support vector machine classification algorithm. The analysis consists of two parts: the regularization error and the sample error. While many techniques are available for treating the sample error, much less is known about the regularization error and the corresponding approximation error for reproducing kernel Hilbert spaces. We are mainly concerned with the regularization error, which is estimated for general distributions by a K-functional in weighted Lq spaces. For weakly separable distributions (i.e., those whose margin may be zero), satisfactory convergence rates are provided by means of separating functions. A projection operator is introduced, which leads to better sample error estimates, especially for kernels of small complexity. The misclassification error is bounded by the V-risk associated with a general class of loss functions V. The difficulty of bounding the offset is overcome. Polynomial kernels and Gaussian kernels are used to demonstrate the main results. The choice of the regularization parameter plays an important role in our analysis.
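Since the q-norm soft margin classifier is described above only in words, a minimal sketch may help fix the objective being analyzed: regularized empirical risk minimization with the loss V(y, f(x)) = (1 - y f(x))_+^q plus a squared RKHS norm penalty, with an unpenalized offset b. Everything below is an illustrative assumption rather than the paper's construction: the Gaussian kernel width sigma, the generic quasi-Newton solver, and the names gaussian_kernel and fit_q_norm_svm are all hypothetical choices.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(X, Y, sigma=1.0):
    # Gram matrix of K(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_q_norm_svm(X, y, q=2.0, lam=0.1, sigma=1.0):
    """Minimize (1/m) * sum_i (1 - y_i f(x_i))_+^q + lam * ||f||_K^2
    over f(x) = sum_j alpha_j K(x, x_j) + b (representer theorem),
    where b is the offset term the abstract refers to."""
    m = len(y)
    K = gaussian_kernel(X, X, sigma)

    def objective(theta):
        alpha, b = theta[:m], theta[m]
        slack = np.maximum(0.0, 1.0 - y * (K @ alpha + b))  # (1 - y f(x))_+
        return np.mean(slack ** q) + lam * alpha @ K @ alpha

    # For q >= 2 the objective is differentiable, so a quasi-Newton
    # solver suffices for a sketch (q = 1 would call for subgradients).
    res = minimize(objective, np.zeros(m + 1), method="L-BFGS-B")
    alpha, b = res.x[:m], res.x[m]
    return lambda Xnew: np.sign(gaussian_kernel(Xnew, X, sigma) @ alpha + b)

# Toy usage: linearly separable labels in the plane.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0] + X[:, 1])
predict = fit_q_norm_svm(X, y, q=2.0, lam=0.05)
print("training accuracy:", (predict(X) == y).mean())
```

The representer theorem reduces the infinite-dimensional RKHS minimization to the m + 1 finite parameters (alpha, b); note that b enters the empirical risk but not the penalty, which is precisely the unregularized offset whose bounding the abstract describes as a difficulty.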