Learning rates provide a quantitative description of the consistency of a learning algorithm. In this paper, we estimate the learning rates of the coefficient-based regularized classification algorithm in terms of a K-functional, and we derive explicit rates when the loss function is the least squares loss or the hinge loss.
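For orientation, the following is a minimal sketch of the kind of scheme and quantity the abstract refers to; the exact objective, regularizer, and normalization used in the paper are assumptions here, modeled on the standard coefficient-based regularization setup rather than taken from this abstract. Given samples z = {(x_i, y_i)}_{i=1}^m and a Mercer kernel K, a coefficient-based regularized classifier is typically of the form

% Hypothetical formulation (standard in the coefficient-regularization
% literature); V is the loss, e.g. least squares V(y,t) = (y - t)^2 or
% hinge V(y,t) = max(0, 1 - yt), and lambda > 0 is the regularization parameter.
f_z = \sum_{i=1}^{m} \alpha_i^{z} K(\cdot, x_i), \qquad
\alpha^{z} = \arg\min_{\alpha \in \mathbb{R}^{m}}
  \frac{1}{m} \sum_{i=1}^{m} V\Bigl(y_i, \sum_{j=1}^{m} \alpha_j K(x_i, x_j)\Bigr)
  + \lambda \sum_{i=1}^{m} \alpha_i^{2},

and a K-functional of the form

% Hypothetical K-functional measuring how well the reproducing kernel
% Hilbert space H_K approximates the target; E is the expected risk and
% f_c the Bayes rule.
\mathcal{K}(\lambda) = \inf_{f \in \mathcal{H}_K}
  \bigl\{ \mathcal{E}(f) - \mathcal{E}(f_c) + \lambda \, \|f\|_K^{2} \bigr\}

controls the approximation error. In analyses of this type, the decay of the K-functional as lambda tends to 0 is balanced against a capacity-based sample-error bound to obtain the explicit learning rate.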