Learning in Neural Networks: Theoretical Foundations
The least squares regression problem is studied via regularization schemes in reproducing kernel Hilbert spaces. The learning algorithm is implemented with samples drawn from unbounded sampling processes. The purpose of this paper is to present concentration estimates for the error based on ℓ2-empirical covering numbers, which improve the learning rates found in the literature.
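For concreteness, below is a minimal sketch of the regularization scheme the abstract refers to: regularized least squares in a reproducing kernel Hilbert space, i.e. kernel ridge regression. The Gaussian kernel, the regularization parameter lam, the bandwidth sigma, and the heavy-tailed noise used to mimic an unbounded sampling process are illustrative assumptions, not details taken from the paper.

import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Gaussian (RBF) kernel matrix: K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2)).
    sq_dists = (
        np.sum(X1**2, axis=1)[:, None]
        + np.sum(X2**2, axis=1)[None, :]
        - 2.0 * X1 @ X2.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

def fit_regularized_least_squares(X, y, lam=1e-2, sigma=1.0):
    # Solve the RKHS regularization scheme
    #     f = argmin_{f in H_K} (1/m) sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2.
    # By the representer theorem the minimizer is f(x) = sum_i alpha_i K(x_i, x),
    # where the coefficients satisfy (K + lam * m * I) alpha = y.
    m = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * m * np.eye(m), y)
    return alpha

def predict(X_train, alpha, X_new, sigma=1.0):
    # Evaluate f(x) = sum_i alpha_i K(x_i, x) at the new points.
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Toy usage: Student-t noise has unbounded support, loosely mimicking the
# unbounded sampling setting (output values not uniformly bounded).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(100, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_t(df=3, size=100)
alpha = fit_regularized_least_squares(X, y, lam=1e-3, sigma=0.3)
y_hat = predict(X, alpha, X, sigma=0.3)

Note that the solve uses lam * m rather than lam alone because the empirical risk is averaged over the m samples; scaling the penalty this way keeps the two terms of the objective on comparable footing as the sample size grows.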