We discuss how a large class of regularization methods, collectively known as spectral regularization and originally designed for solving ill-posed inverse problems, gives rise to regularized learning algorithms. All of these algorithms are consistent kernel methods that are easy to implement. The intuition behind their derivation is that the same principle that numerically stabilizes a matrix inversion problem is also what prevents overfitting. The various methods share a common derivation but differ in their computational and theoretical properties. We describe examples of such algorithms, analyze their classification performance on several data sets, and discuss their applicability to real-world problems.
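To make the filtering intuition concrete, here is a minimal NumPy sketch (an illustration, not code from the paper) of two such filters, Tikhonov regularization and spectral cut-off, expressed as scalar functions applied to the eigenvalues of the kernel matrix. The Gaussian kernel, the parameter values, and the synthetic data are assumptions chosen for illustration, and normalization conventions (e.g., filtering K/n rather than K) are elided.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Pairwise Gaussian (RBF) kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def spectral_fit(K, y, g):
    """Compute coefficients c = g(K) y by filtering the eigenvalues of K.

    K is the (symmetric, positive semidefinite) kernel matrix and g is a
    spectral filter: a scalar function applied entrywise to the eigenvalues.
    """
    evals, evecs = np.linalg.eigh(K)           # K = V diag(evals) V^T
    return evecs @ (g(evals) * (evecs.T @ y))  # c = V g(evals) V^T y

def tikhonov(lam):
    # Tikhonov / ridge regularization: g(s) = 1 / (s + lam)
    return lambda s: 1.0 / (s + lam)

def spectral_cutoff(lam):
    # Truncated SVD: invert eigenvalues above the threshold, zero the rest
    return lambda s: np.where(s > lam, 1.0 / np.maximum(s, lam), 0.0)

# Illustrative usage on synthetic binary-labelled data.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2))
y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(50))
K = gaussian_kernel(X, X)
c = spectral_fit(K, y, tikhonov(lam=0.1))
f = lambda X_new: gaussian_kernel(X_new, X) @ c  # f(x) = sum_i c_i k(x, x_i)
print(np.mean(np.sign(f(X)) == y))               # training accuracy
```

Other members of the family, such as iterated Tikhonov or Landweber iteration, differ only in the choice of filter function, which is what gives these algorithms their common derivation while leaving them with distinct computational and theoretical properties.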