Fast radial basis function interpolation with Gaussians by localization and iteration. Journal of Computational Physics.
The uselessness of the Fast Gauss Transform for summing Gaussian radial basis function series. Journal of Computational Physics.
A fast mesh deformation method using explicit interpolation. Journal of Computational Physics.
Fast Evaluation of Multiquadric RBF Sums by a Cartesian Treecode. SIAM Journal on Scientific Computing.
A Fast Treecode for Multiquadric Interpolation with Varying Shape Parameters. SIAM Journal on Scientific Computing.
Difference Filter Preconditioning for Large Covariance Matrices. SIAM Journal on Matrix Analysis and Applications.
A Matrix-free Approach for Solving the Parametric Gaussian Process Maximum Likelihood Problem. SIAM Journal on Scientific Computing.
A kernel class allowing for fast computations in shape spaces induced by diffeomorphisms. Journal of Computational and Applied Mathematics.
We consider a preconditioned Krylov subspace iterative algorithm presented by Faul, Goodsell, and Powell (IMA J. Numer. Anal. 25 (2005), pp. 1-24) for computing the coefficients of a radial basis function interpolant over $N$ data points. This preconditioned Krylov iteration has been demonstrated to be extremely robust to the distribution of the points, and it converges rapidly. However, several steps of the method have computational and memory costs that scale as $O(N^{2})$: the preliminary computations that build the preconditioner, and the matrix-vector product performed at each step of the iteration. We accelerate the iterative method to an overall cost of $O(N\log N)$. The matrix-vector product is accelerated via the fast multipole method (FMM). The preconditioner requires computing a set of closest points to each point, and we develop an $O(N\log N)$ algorithm for this step as well. Results are presented for multiquadric interpolation in $\mathbb{R}^{2}$ and biharmonic interpolation in $\mathbb{R}^{3}$. A novel FMM algorithm for evaluating sums of multiquadric functions in $\mathbb{R}^{2}$ is presented as well.
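The pieces described in the abstract can be sketched in a few lines of NumPy/SciPy. This is a minimal illustration, not the paper's method: the dense $O(N^{2})$ matrix-vector product stands in for the FMM-accelerated product, plain (unpreconditioned) GMRES stands in for the Faul-Goodsell-Powell iteration, and a kd-tree query stands in for the paper's closest-points algorithm. The problem size `N`, shape parameter `c`, and test function are all illustrative choices.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
N = 200                                    # illustrative problem size
points = rng.random((N, 2))                # scattered data in the unit square
f = np.sin(2 * np.pi * points[:, 0]) * np.cos(2 * np.pi * points[:, 1])

# Multiquadric kernel phi(r) = sqrt(r^2 + c^2); c is an illustrative shape parameter.
c = 0.1
r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
A = np.sqrt(r ** 2 + c ** 2)               # dense N x N interpolation matrix

# Krylov solve for the interpolation coefficients. The dense O(N^2) matvec
# inside this operator is the step the paper replaces with an O(N log N)
# FMM-accelerated product; the preconditioner is likewise omitted here.
op = LinearOperator((N, N), matvec=lambda x: A @ x)
coeffs, info = gmres(op, f, restart=N, maxiter=N)

# Closest-points step used to build the preconditioner: the k nearest
# neighbors of every point, found in O(N log N) total with a kd-tree.
tree = cKDTree(points)
dists, idx = tree.query(points, k=11)      # column 0 is the point itself
```

With the coefficients in hand, the interpolant at a new point $x$ is $s(x) = \sum_j \lambda_j \sqrt{\|x - x_j\|^2 + c^2}$; evaluating such sums at many targets is exactly the kind of computation the paper's multiquadric FMM addresses.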