On regularization algorithms in learning theory
Journal of Complexity
Many works have related learning from examples to regularization techniques for inverse problems, emphasizing the strong algorithmic and conceptual analogy between certain learning algorithms and regularization algorithms. In particular, it is well known that regularization schemes such as Tikhonov regularization can be effectively used in the context of learning and are closely related to algorithms such as support vector machines. Nevertheless, the connection with inverse problems was previously considered only for the discrete (finite-sample) problem, and the probabilistic aspects of learning from examples were not taken into account. In this paper we provide a natural extension of such an analysis to the continuous (population) case and study the interplay between the discrete and continuous problems. From a theoretical point of view, this allows us to draw a clear connection between the consistency approach in learning theory and the stability convergence property in ill-posed inverse problems. The main mathematical result of the paper is a new probabilistic bound for the regularized least-squares algorithm. By means of standard results on the approximation term, the consistency of the algorithm follows easily.
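To make the regularized least-squares algorithm discussed in the abstract concrete, here is a minimal sketch of Tikhonov-regularized least squares in a reproducing kernel Hilbert space (kernel ridge regression). The Gaussian kernel, the regularization parameter `lam`, the bandwidth `sigma`, and the toy sine-regression data are illustrative assumptions, not taken from the paper; only the general scheme (the representer-theorem linear system with the n·λ scaling common in learning-theory formulations) is from the standard literature.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix; the kernel choice is
    # an assumption for illustration, not prescribed by the paper.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def rls_fit(X, y, lam=0.1, sigma=1.0):
    # Regularized (Tikhonov) least squares over an RKHS:
    #   minimize (1/n) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2
    # By the representer theorem, f(x) = sum_i c_i k(x_i, x),
    # with coefficients solving (K + n*lam*I) c = y.
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    c = np.linalg.solve(K + n * lam * np.eye(n), y)
    return c

def rls_predict(X_train, c, X_new, sigma=1.0):
    # Evaluate the estimator at new points.
    return gaussian_kernel(X_new, X_train, sigma) @ c

# Toy usage: noisy samples of sin on [0, 2*pi] (hypothetical data).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2 * np.pi, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
c = rls_fit(X, y, lam=1e-3, sigma=0.5)
y_hat = rls_predict(X, c, X, sigma=0.5)
```

In the paper's terminology, this solves the discrete (finite-sample) problem; the analysis of the abstract concerns how such estimators behave as the sample size grows, i.e. how the discrete solution converges to the continuous (population) one.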