Randomized sampling has recently been shown to be a highly efficient technique for computing approximate factorizations of matrices of low numerical rank. This paper describes an extension of such techniques to a wider class of matrices that are not themselves rank-deficient but whose off-diagonal blocks are; specifically, the class of so-called hierarchically semiseparable (HSS) matrices. HSS matrices arise frequently in numerical analysis and signal processing, in particular in the construction of fast methods for the numerical solution of differential and integral equations. The HSS structure allows algebraic operations (matrix-vector multiplication, matrix factorization, matrix inversion, etc.) to be performed very rapidly, but only once the HSS representation of the matrix has been constructed. How to compute this representation rapidly in the first place is much less well understood. The present paper demonstrates that if an $N\times N$ matrix can be applied to a vector in $O(N)$ time, and if individual entries of the matrix can be computed rapidly, then, provided that an HSS representation of the matrix exists, it can be constructed in $O(N\,k^{2})$ operations, where $k$ is an upper bound on the numerical rank of the off-diagonal blocks. The point is that when legacy codes (based on, e.g., the fast multipole method) are available for the fast matrix-vector multiply, the proposed algorithm can be used to obtain the HSS representation of the matrix, after which well-established techniques for HSS matrices can be used to invert or factor it.
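To give a flavor of the black-box access model described above, the following is a minimal sketch (not the paper's HSS algorithm itself) of the core randomized primitive it builds on: recovering a low-rank factorization of a matrix that can only be touched through matrix-vector products. The function name `randomized_lowrank` and the callbacks `apply_A`/`apply_At` are illustrative choices, not notation from the paper.

```python
import numpy as np

def randomized_lowrank(apply_A, apply_At, n, k, p=10, seed=0):
    """Return (Q, B) with A ~= Q @ B, using k + p applications of A and of A^T.

    apply_A(X)  must return A  @ X for an (n, m) block of vectors X;
    apply_At(X) must return A^T @ X. The matrix A itself is never formed here.
    """
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n, k + p))   # random Gaussian test matrix
    Y = apply_A(Omega)                        # samples of the range of A
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis for that range
    B = apply_At(Q).T                         # B = Q^T A, so A ~= Q (Q^T A)
    return Q, B

# Demo: a rank-5 matrix accessed only through matvec callbacks.
n, k = 200, 5
rng = np.random.default_rng(1)
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))
Q, B = randomized_lowrank(lambda X: A @ X, lambda X: A.T @ X, n, k)
rel_err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
```

In the HSS setting this primitive is applied not to the whole matrix (which is full rank) but, in effect, to its rank-deficient off-diagonal blocks, with the entry-evaluation capability used to subtract the diagonal contributions from the samples.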