Regularized classifiers are kernel-based classification methods generated from Tikhonov regularization schemes, and trigonometric polynomial kernels are among the most important kernels, playing key roles in signal processing. The main goal of this paper is to provide convergence rates for classification algorithms generated by regularization schemes with trigonometric polynomial kernels. As a special case, an error analysis for the support vector machine (SVM) soft margin classifier is presented. The norms of the Fejér operator in the reproducing kernel Hilbert space, together with the approximation properties of this operator in the L^1 space of periodic functions, play key roles in the analysis of the regularization error. New bounds on the learning rates of regularization algorithms, based on covering-number estimates for normalized loss functions, are established. Combined with the analysis of the sample error, explicit learning rates for the SVM are derived.
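The paper's analysis is not reproduced here, but the central kernel objects can be illustrated. As a minimal sketch, assume the Fejér kernel in its Fourier-series form, F_n(t) = 1 + 2·Σ_{k=1}^{n} (1 − k/(n+1)) cos(kt); its nonnegative Fourier coefficients make K(s, t) = F_n(s − t) a valid positive semidefinite (Mercer) kernel on periodic data, which is the property a reproducing kernel Hilbert space construction relies on. The function names and the PSD check below are illustrative choices, not taken from the paper:

```python
import math

def fejer_kernel(s, t, n=5):
    """Fejer kernel F_n(s - t) = 1 + 2 * sum_{k=1}^n (1 - k/(n+1)) cos(k(s - t)).

    All Fourier coefficients (1 - k/(n+1)) are nonnegative, so this
    translation-invariant kernel is positive semidefinite.
    """
    d = s - t
    return 1.0 + 2.0 * sum((1 - k / (n + 1)) * math.cos(k * d)
                           for k in range(1, n + 1))

def gram(xs, n=5):
    """Gram matrix K_ij = F_n(x_i - x_j) for sample points xs."""
    return [[fejer_kernel(a, b, n) for b in xs] for a in xs]

def is_psd(M, tol=1e-9):
    """Check positive semidefiniteness via an attempted Cholesky factorization,
    tolerating (near-)singular pivots."""
    m = [row[:] for row in M]
    size = len(m)
    for i in range(size):
        for j in range(i + 1):
            s = sum(m[i][k] * m[j][k] for k in range(j))
            if i == j:
                v = m[i][i] - s
                if v < -tol:          # a negative pivot means an eigenvalue < 0
                    return False
                m[i][j] = math.sqrt(max(v, 0.0))
            else:
                m[i][j] = (m[i][j] - s) / m[j][j] if abs(m[j][j]) > tol else 0.0
    return True

xs = [0.0, 0.7, 1.9, 3.1]
print(fejer_kernel(0.0, 0.0))   # F_n(0) = n + 1 = 6.0 for n = 5
print(is_psd(gram(xs)))          # the Fejer Gram matrix is PSD -> True
```

Evaluating the series agrees with the closed form F_n(t) = (1/(n+1)) (sin((n+1)t/2) / sin(t/2))^2; an SVM soft margin classifier of the kind the paper analyzes would then be trained on such a Gram matrix (e.g. via a precomputed-kernel solver).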