We establish learning rates to the Bayes risk for support vector machines (SVMs) using a regularization sequence $\lambda_n = n^{-\alpha}$, where $\alpha \in (0,1)$ is arbitrary. Under a noise condition recently proposed by Tsybakov, these rates can become faster than $n^{-1/2}$. In order to deal with the approximation error, we present a general concept called the approximation error function, which describes how well the infinite-sample versions of the considered SVMs approximate the data-generating distribution. In addition, we discuss in some detail the relation between the “classical” approximation error and the approximation error function. Finally, for distributions satisfying a geometric noise assumption, we establish learning rates when the underlying RKHS is a Sobolev space.
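For orientation, the following is a minimal sketch of the objects the abstract refers to, written in standard regularized-SVM notation (loss $L$, RKHS $H$, $L$-risk $\mathcal{R}_{L,P}$ with Bayes risk $\mathcal{R}_{L,P}^{*}$); this notation is assumed for illustration, and the paper's exact definitions may differ:
\begin{align*}
f_{D,\lambda} &= \arg\min_{f \in H} \; \lambda \|f\|_H^2 + \frac{1}{n}\sum_{i=1}^{n} L\bigl(y_i, f(x_i)\bigr) && \text{(empirical SVM solution)}\\
f_{P,\lambda} &= \arg\min_{f \in H} \; \lambda \|f\|_H^2 + \mathbb{E}_{(x,y)\sim P}\, L\bigl(y, f(x)\bigr) && \text{(infinite-sample version)}\\
a(\lambda) &= \lambda \|f_{P,\lambda}\|_H^2 + \mathcal{R}_{L,P}(f_{P,\lambda}) - \mathcal{R}_{L,P}^{*} && \text{(approximation error function)}
\end{align*}
In this reading, the choice $\lambda_n = n^{-\alpha}$ trades off how fast $a(\lambda_n)$ decays against the estimation error, and Tsybakov's noise condition, commonly stated as $P_X\bigl(\{x : |2\eta(x) - 1| \le t\}\bigr) \le C t^{q}$ for the posterior probability $\eta(x) = P(y = 1 \mid x)$, is what makes rates faster than $n^{-1/2}$ possible.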