Generalization Bounds of Regularization Algorithm with Gaussian Kernels
Neural Processing Letters
This paper considers binary classification algorithms generated from Tikhonov regularization schemes associated with general convex loss functions and varying Gaussian kernels. Our main goal is to provide fast convergence rates for the excess misclassification error. Allowing the Gaussian kernel to vary in the algorithms improves the learning rates, measured through the regularization error and the sample error. The special structure of Gaussian kernels enables us to construct, via an approximation scheme based on a Fourier analysis technique, uniformly bounded regularizing functions that achieve polynomial decay of the regularization error under a Sobolev smoothness condition. The sample error is estimated by means of a projection operator and a tight bound for the covering numbers of the reproducing kernel Hilbert spaces generated by Gaussian kernels. The convexity of the general loss function plays an essential role in our analysis.
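As a rough sketch of the setting described above, the Tikhonov regularization scheme with a convex loss and a Gaussian kernel can be written, in standard notation for this literature (the exact kernel normalization and symbols are assumptions, not taken verbatim from the paper), as follows. Given a sample z = {(x_i, y_i)}_{i=1}^m, a convex loss \phi, and the Gaussian kernel K_\sigma(x, u) = \exp(-|x - u|^2 / \sigma^2) with variance parameter \sigma, the algorithm outputs

\[
f_{z,\lambda,\sigma} \;=\; \arg\min_{f \in \mathcal{H}_{K_\sigma}} \left\{ \frac{1}{m} \sum_{i=1}^{m} \phi\bigl(y_i f(x_i)\bigr) \;+\; \lambda \, \|f\|_{K_\sigma}^{2} \right\},
\]

and the induced classifier is \mathrm{sgn}\bigl(\pi(f_{z,\lambda,\sigma})\bigr), where \pi denotes the projection onto [-1, 1]. The quantity bounded in the paper is the excess misclassification error

\[
\mathcal{R}\bigl(\mathrm{sgn}(\pi(f_{z,\lambda,\sigma}))\bigr) \;-\; \mathcal{R}(f_c),
\]

where f_c is the Bayes rule. "Varying Gaussian kernels" means that the variance \sigma, like the regularization parameter \lambda, is tuned with the sample size, which is what yields the improved learning rates under the Sobolev smoothness condition.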