The selection of the penalty functional is critical for the performance of a regularized learning algorithm, and thus it deserves special attention. In this article, we present a least squares regression algorithm based on lp-coefficient regularization. Compared with classical regularized least squares regression, the new algorithm differs in its regularization term: the penalty is imposed on the coefficients of the kernel expansion rather than on the norm of the function itself. Our primary focus is the error analysis of the algorithm, and an explicit learning rate is derived under standard assumptions.
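To make the distinction concrete, below is a minimal sketch of coefficient-based regularized least squares for the p = 1 case, i.e. minimizing (1/m)||K alpha - y||^2 + lambda * ||alpha||_1 over the coefficient vector alpha of a kernel expansion. This is an illustration only, not the authors' implementation: the Gaussian kernel, the ISTA (proximal gradient) solver, and all hyperparameter values are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Pairwise Gaussian kernel: K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def l1_coefficient_least_squares(X, y, lam=0.1, sigma=1.0, n_iter=500):
    """Coefficient-regularized least squares (p = 1 case, illustrative):
       min_alpha (1/m) ||K alpha - y||^2 + lam * ||alpha||_1,
       solved here by ISTA, i.e. proximal gradient descent."""
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.zeros(m)
    # Step size 1/L, where L is the Lipschitz constant of the smooth part:
    # grad of (1/m)||K alpha - y||^2 is (2/m) K^T (K alpha - y), so L = (2/m)||K||_2^2.
    L = 2.0 / m * np.linalg.norm(K, 2) ** 2
    for _ in range(n_iter):
        grad = 2.0 / m * K.T @ (K @ alpha - y)
        z = alpha - grad / L
        # Soft-thresholding: the proximal operator of the l1 penalty
        alpha = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return alpha

# Usage on synthetic data (assumed setup for demonstration)
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha = l1_coefficient_least_squares(X, y, lam=0.05)
f_X = gaussian_kernel(X, X) @ alpha  # fitted values at the training points
```

Note that the hypothesis space here is sample dependent (the expansion runs over the training points), and the l1 penalty tends to drive many coefficients to exactly zero, which is the practical motivation for coefficient-based penalties; other choices of p in the lp penalty would replace the soft-thresholding step with the corresponding proximal map.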