In Numerical Analysis one often needs to conclude that an error function is small everywhere if it is small on a large discrete point set and if a bound on a derivative is available. Sampling inequalities put this onto a solid mathematical basis. A stability inequality is similar, but holds only on a finite-dimensional space of trial functions: it allows a trial function to be bounded by a norm of a sufficiently fine data sample, without any bound on a high derivative. This survey first describes these two types of inequalities in general and shows how a stability inequality can be derived from a sampling inequality combined with an inverse inequality on a finite-dimensional trial space. Then the state of the art in sampling inequalities is reviewed, and new extensions involving functions of infinite smoothness and sampling operators using weak data are presented. Finally, typical applications of sampling and stability inequalities to the recovery of functions from scattered weak or strong data are surveyed. These include Support Vector Machines and unsymmetric methods for solving partial differential equations.
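The derivation sketched above can be illustrated by the standard model case from this literature; the notation here is illustrative, and the precise norms, exponents, and constants vary between papers. For a bounded domain $\Omega \subset \mathbb{R}^d$, a discrete point set $X \subset \Omega$ with fill distance $h$, and a Sobolev smoothness order $k > d/2$, a typical sampling inequality reads

```latex
% A typical sampling inequality: the function is small everywhere
% if it is small on X and a high derivative is bounded.
\|u\|_{L^\infty(\Omega)}
  \le C \left( h^{\,k-d/2}\,\|u\|_{W^k_2(\Omega)}
             + \|u|_X\|_{\ell^\infty(X)} \right),
\qquad u \in W^k_2(\Omega).

% On a finite-dimensional trial space S_q with discretization
% scale q, an inverse inequality bounds the high derivative:
\|s\|_{W^k_2(\Omega)}
  \le C_{\mathrm{inv}}\, q^{-(k-d/2)}\,\|s\|_{L^\infty(\Omega)},
\qquad s \in S_q.

% Inserting the inverse inequality into the sampling inequality:
\|s\|_{L^\infty(\Omega)}
  \le C\,C_{\mathrm{inv}} \left(\tfrac{h}{q}\right)^{k-d/2}
      \|s\|_{L^\infty(\Omega)}
    + C\,\|s|_X\|_{\ell^\infty(X)}.

% If the data sample is fine enough relative to the trial space,
% i.e. C C_inv (h/q)^{k-d/2} <= 1/2, the first term can be
% absorbed, yielding a stability inequality without any
% derivative bound:
\|s\|_{L^\infty(\Omega)}
  \le 2C\,\|s|_X\|_{\ell^\infty(X)},
\qquad s \in S_q.
```

The oversampling condition $h \ll q$ in the final step is what "sufficiently fine data sample" means in the abstract: the trial space must not be able to oscillate between the data points.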