Concentration estimates for learning with unbounded sampling
Advances in Computational Mathematics
A standard assumption in the theoretical study of learning algorithms for regression is that the output sample values are uniformly bounded. This excludes the common case of Gaussian noise. In this paper we investigate the regression learning algorithm generated by the least squares regularization scheme in reproducing kernel Hilbert spaces, without assuming uniform boundedness of the sampling. By imposing some incremental conditions on the moments of the output variable, we derive learning rates in terms of the regularity of the regression function and the capacity of the hypothesis space. The novelty of our analysis is a new covering number argument for bounding the sample error.
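For reference, the least squares regularization scheme named in the abstract takes a sample z = {(x_i, y_i)}_{i=1}^m and produces the estimator

f_{z,\lambda} = \arg\min_{f \in \mathcal{H}_K} \left\{ \frac{1}{m} \sum_{i=1}^{m} \big( f(x_i) - y_i \big)^2 + \lambda \| f \|_K^2 \right\},

where \mathcal{H}_K is the reproducing kernel Hilbert space of a kernel K and \lambda > 0 is the regularization parameter. Below is a minimal Python sketch of this scheme (kernel ridge regression via the representer theorem) on synthetic data with Gaussian noise, so the outputs are unbounded as in the setting the paper studies; the Gaussian kernel, its bandwidth, and the toy target function are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(s, t, sigma=0.5):
    # K(s, t) = exp(-|s - t|^2 / (2 sigma^2)); the kernel choice is illustrative.
    return np.exp(-((s[:, None] - t[None, :]) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
m = 200
x = rng.uniform(-1.0, 1.0, m)
# Gaussian noise makes the outputs unbounded, violating the usual
# uniform-boundedness assumption that the paper dispenses with.
y = np.sin(np.pi * x) + rng.normal(scale=0.3, size=m)

lam = 1e-2  # regularization parameter lambda
K = gaussian_kernel(x, x)
# Representer theorem: f_{z,lambda}(t) = sum_i alpha_i K(t, x_i), where
# alpha = (K + m * lambda * I)^{-1} y solves the regularized problem.
alpha = np.linalg.solve(K + m * lam * np.eye(m), y)

t = np.linspace(-1.0, 1.0, 5)
print(gaussian_kernel(t, x) @ alpha)  # estimator evaluated at a few points
```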