In nonparametric classification and regression problems, regularized kernel methods, in particular support vector machines, attract much attention in theoretical and in applied statistics. In an abstract sense, regularized kernel methods (simply called SVMs here) can be seen as regularized M-estimators for a parameter in a (typically infinite-dimensional) reproducing kernel Hilbert space. For smooth loss functions $L$, it is shown that the difference between the estimator, i.e. the empirical SVM $f_{L,D_n,\lambda_{D_n}}$, and the theoretical SVM $f_{L,P,\lambda_0}$ is asymptotically normal with rate $\sqrt{n}$. That is, $\sqrt{n}\,(f_{L,D_n,\lambda_{D_n}} - f_{L,P,\lambda_0})$ converges weakly to a Gaussian process in the reproducing kernel Hilbert space. As is common in real applications, the choice of the regularization parameter $\lambda_{D_n}$ in $f_{L,D_n,\lambda_{D_n}}$ may depend on the data $D_n$. The proof is done by an application of the functional delta-method and by showing that the SVM-functional $P \mapsto f_{L,P,\lambda}$ is suitably Hadamard-differentiable.
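For concreteness, here is a minimal sketch in LaTeX of the objects the abstract refers to. It assumes the standard regularized empirical risk formulation of SVMs; the symbols $\mathcal{H}$ (the reproducing kernel Hilbert space), the explicit objective below, and the zero-mean limit process $\mathbb{G}$ are conventional choices supplied here for illustration, not spelled out in the abstract itself.

% Empirical SVM: regularized M-estimation over an RKHS \mathcal{H},
% fitted on the sample D_n (standard formulation, assumed here)
f_{L,D_n,\lambda}
  = \operatorname*{arg\,min}_{f \in \mathcal{H}}
    \frac{1}{n} \sum_{i=1}^{n} L\bigl(x_i, y_i, f(x_i)\bigr)
    + \lambda \,\lVert f \rVert_{\mathcal{H}}^{2},
\qquad D_n = \bigl((x_1, y_1), \dots, (x_n, y_n)\bigr).

% Theoretical SVM: the same objective with the empirical average
% replaced by the expectation under the data-generating distribution P
f_{L,P,\lambda}
  = \operatorname*{arg\,min}_{f \in \mathcal{H}}
    \mathbb{E}_{(X,Y) \sim P}\, L\bigl(X, Y, f(X)\bigr)
    + \lambda \,\lVert f \rVert_{\mathcal{H}}^{2}.

% Asymptotic normality as stated in the abstract: weak convergence
% to a Gaussian process \mathbb{G} in \mathcal{H} (zero mean assumed,
% the standard form of such limit results)
\sqrt{n}\,\bigl(f_{L,D_n,\lambda_{D_n}} - f_{L,P,\lambda_0}\bigr)
  \rightsquigarrow \mathbb{G}
  \quad \text{in } \mathcal{H}.

Read this way, the result is a functional central limit theorem: the delta-method transfers the $\sqrt{n}$-asymptotics of the empirical distribution to the estimator through the Hadamard-differentiable map $P \mapsto f_{L,P,\lambda}$.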