Regularization schemes with an ℓ¹-regularizer often produce sparse representations for objects in approximation theory, image processing, statistics, and learning theory. In this paper, we study a kernel-based learning algorithm for regression generated by regularization schemes associated with the ℓ¹-regularizer. We show that when the kernel is smooth enough, the convergence rates of the learning algorithm can be independent of the dimension of the input space of the regression problem. This confirms the effectiveness of the learning algorithm. Our error analysis is carried out by means of an approximation theory approach, using a local polynomial reproduction formula and the norming set condition.
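The following is a minimal sketch of the kind of algorithm the abstract describes, assuming the standard coefficient-based ℓ¹ scheme: given samples (x_i, y_i), i = 1, ..., m, one learns f_z(x) = Σ_i c_i K(x, x_i) with the coefficient vector c minimizing (1/m) Σ_j (f_z(x_j) - y_j)² + λ Σ_i |c_i|. The Gaussian kernel, the regularization parameter λ, and the helper names below are illustrative choices, not taken from the paper; the paper's analysis concerns sufficiently smooth kernels in general.

```python
import numpy as np
from sklearn.linear_model import Lasso


def gaussian_kernel(X, Z, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||X[i] - Z[j]||^2 / (2 * sigma^2))."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))


def l1_kernel_regression(X, y, lam=0.1, sigma=1.0):
    """Fit coefficients c of f(x) = sum_i c_i K(x, X[i]) with an l1 penalty.

    Minimizes (1/m) ||K c - y||^2 + lam * ||c||_1 over c. sklearn's Lasso
    minimizes (1/(2m)) ||K c - y||^2 + alpha * ||c||_1, so alpha = lam / 2
    gives the same minimizer.
    """
    K = gaussian_kernel(X, X, sigma)
    model = Lasso(alpha=lam / 2.0, fit_intercept=False, max_iter=10000)
    model.fit(K, y)
    return model.coef_


# Usage: recover a univariate regression function from noisy samples.
# The l1 penalty typically zeroes out most coefficients, so f_z is a
# sparse kernel expansion over the sample points.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=50)
c = l1_kernel_regression(X, y, lam=0.05, sigma=0.3)
print("nonzero coefficients:", np.count_nonzero(c), "of", len(c))
```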