In this paper, we study the statistical properties of the method of regularization with radial basis functions in the context of linear inverse problems. Radial basis function regularization is widely used in machine learning because of its demonstrated effectiveness in numerous applications and its computational advantages. From a statistical viewpoint, one of the main advantages of radial basis function regularization in general, and Gaussian radial basis function regularization in particular, is the ability to adapt to varying degrees of smoothness in a direct problem. We show here that similar approaches for inverse problems not only share this adaptivity to the smoothness of the signal but can also accommodate different degrees of ill-posedness. These results provide further theoretical support for the superior performance of radial basis function regularization observed empirically.
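As an illustration of the method the abstract refers to (a generic sketch, not the authors' specific construction for inverse problems), Gaussian radial basis function regularization in the direct regression setting amounts to kernel ridge regression with a Gaussian kernel. The bandwidth and penalty values below are arbitrary choices for the example, and the test signal is hypothetical.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth):
    # Gaussian (RBF) kernel matrix for 1-D inputs:
    # K[i, j] = exp(-(x_i - y_j)^2 / (2 * bandwidth^2)).
    d2 = (x[:, None] - y[None, :]) ** 2
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def rbf_regularized_fit(x, y, bandwidth=0.2, lam=1e-3):
    # Regularized least squares in the RKHS of the Gaussian kernel:
    # solve (K + n*lam*I) c = y; the estimate is f(t) = sum_j c_j k(t, x_j).
    n = len(x)
    K = gaussian_kernel(x, x, bandwidth)
    coef = np.linalg.solve(K + n * lam * np.eye(n), y)
    return lambda t: gaussian_kernel(t, x, bandwidth) @ coef

# Noisy samples of a smooth signal (a direct problem, for illustration).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 80)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(80)

f_hat = rbf_regularized_fit(x, y)
grid = np.linspace(0.0, 1.0, 200)
estimate = f_hat(grid)
```

The smoothing penalty `lam` plays the role of the regularization parameter whose tuning governs the adaptivity discussed in the abstract; in the inverse-problem setting the data would instead be noisy observations of a transformed signal, with the degree of ill-posedness determined by the transform.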