On regularization algorithms in learning theory
Journal of Complexity
We develop a theoretical analysis of the performance of the regularized least-squares algorithm on a reproducing kernel Hilbert space in the supervised learning setting. The results hold in the general framework of vector-valued functions and can therefore be applied to multi-task problems. In particular, we observe that the notion of effective dimension plays a central role in defining a criterion for choosing the regularization parameter as a function of the number of samples. Moreover, a complete minimax analysis of the problem is given, showing that the convergence rates obtained by regularized least-squares estimators are indeed optimal over a suitable class of priors defined by the considered kernel. Finally, we give an improved lower-rate result describing the worst-case asymptotic behavior on individual probability measures rather than over classes of priors.
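For context, the following is a minimal sketch, in standard learning-theory notation, of the two objects the abstract refers to; the symbols used below (the sample z, the integral operator T, and N(lambda)) are conventional choices and are not taken from the paper itself.

% Regularized least-squares estimator over an RKHS H of (possibly vector-valued)
% functions, for a sample z = {(x_i, y_i)}_{i=1}^n and regularization parameter lambda > 0:
f_z^{\lambda} \;=\; \arg\min_{f \in \mathcal{H}} \;
  \frac{1}{n} \sum_{i=1}^{n} \lVert f(x_i) - y_i \rVert^2
  \;+\; \lambda \,\lVert f \rVert_{\mathcal{H}}^2

% Effective dimension of the integral (covariance) operator T induced by the kernel
% and the marginal distribution of the inputs; a conventional definition:
\mathcal{N}(\lambda) \;=\; \operatorname{Tr}\!\bigl[ (T + \lambda I)^{-1} T \bigr],
  \qquad \lambda > 0.

In this notation, the criterion mentioned in the abstract amounts to choosing lambda = lambda(n) so that the sample size n, the effective dimension N(lambda), and lambda itself balance the approximation and estimation contributions to the error.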