A typical approach to estimating the learning rate of a regularized learning scheme is to bound the approximation error by the sum of the sampling error, the hypothesis error, and the regularization error. Using a reproducing kernel space that satisfies the linear representer theorem has the advantage of automatically eliminating the hypothesis error from this sum. Following this direction, we illustrate how reproducing kernel Banach spaces with the ℓ1 norm can be applied to improve the learning rate estimate of ℓ1-regularization in machine learning.
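As a concrete illustration of the kind of ℓ1-regularized kernel scheme discussed above, the sketch below fits a predictor of the representer form f(x) = Σⱼ cⱼ K(x, xⱼ) by minimizing a squared loss plus an ℓ1 penalty on the coefficients, solved with plain ISTA (iterative soft thresholding). This is not the paper's method; the Gaussian kernel, the toy sine data, and all parameter values are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1; produces exact zeros.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_kernel_regression(X, y, lam=0.05, sigma=1.0, n_iter=2000):
    """Minimize (1/2) ||K c - y||^2 + lam * ||c||_1 over the coefficients c
    of the kernel expansion f(x) = sum_j c_j K(x, x_j), using ISTA."""
    K = gaussian_kernel(X, X, sigma)
    # Step size 1/L, where L = ||K||_2^2 is the Lipschitz constant of the gradient.
    step = 1.0 / np.linalg.norm(K, 2) ** 2
    c = np.zeros(len(y))
    for _ in range(n_iter):
        grad = K @ (K @ c - y)          # gradient of the squared-loss term (K symmetric)
        c = soft_threshold(c - step * grad, step * lam)
    return c, K

# Toy data (illustrative): noisy samples of a sine curve.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 2.0 * np.pi, 40)[:, None]
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(40)
c, K = l1_kernel_regression(X, y)
```

Because soft thresholding sets small coefficients exactly to zero, the fitted expansion is sparse: only a subset of the 40 kernel translates K(·, xⱼ) is retained, which is the practical appeal of the ℓ1 penalty over the usual squared-norm regularizer.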