Parallel implementation of multifrontal schemes. Parallel Computing.
Solution of sparse positive definite systems on a shared-memory multiprocessor. International Journal of Parallel Programming.
A modified frontal technique suitable for parallel systems. SIAM Journal on Scientific and Statistical Computing.
Error Analysis of Direct Methods of Matrix Inversion. Journal of the ACM (JACM).
Some Design Features of a Sparse Matrix Code. ACM Transactions on Mathematical Software (TOMS).
Numerical Methods. ACM SIGNUM Newsletter.
Efficient sparse matrix factorization for circuit simulation on vector supercomputers. DAC '89 Proceedings of the 26th ACM/IEEE Design Automation Conference.
Parallel treatment of general sparse matrices. LSSC'05 Proceedings of the 5th international conference on Large-Scale Scientific Computing.
A paradigm for concurrent computing is explored in which a group of autonomous, asynchronous processes shares a common memory space and cooperates to solve a single problem. The processes synchronize with only a few others at a time; barrier synchronization is not permitted except at the beginning and end of the computation. The paradigm maps directly to a shared-memory multiprocessor with efficient synchronization primitives and is applied to the solution of a large, sparse system of linear equations. The algorithm, called pairwise solve (or PSolve), is presented with several variants to address some of the limitations of previous algorithms. On the Alliant FX/8, PSolve is faster than Gaussian elimination and two common sparse matrix algorithms.
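The abstract does not include the PSolve algorithm itself, but the paradigm it describes — asynchronous workers in shared memory that synchronize with only one partner at a time, with no barriers between the start and end of the computation — can be illustrated with a small sketch. The example below (a hypothetical pairwise tree reduction, not the paper's sparse solver) has each worker wait only on the specific partner whose result it needs:

```python
import threading

def pairwise_sum(values, n_workers=4):
    """Sum `values` with n_workers threads using only pairwise
    synchronization (per-worker events), no global barrier."""
    chunk = len(values) // n_workers
    partial = [0] * n_workers                      # shared memory: one slot per worker
    done = [threading.Event() for _ in range(n_workers)]

    def worker(i):
        lo = i * chunk
        hi = len(values) if i == n_workers - 1 else lo + chunk
        partial[i] = sum(values[lo:hi])            # independent local work
        # Tree combine: at each round, worker i synchronizes with exactly
        # one partner; a worker signals `done` only after it has finished
        # every combine it is responsible for.
        step = 1
        while step < n_workers and i % (2 * step) == 0:
            partner = i + step
            if partner < n_workers:
                done[partner].wait()               # pairwise sync, not a barrier
                partial[i] += partial[partner]
            step *= 2
        done[i].set()

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return partial[0]
```

As in the paradigm described above, no worker ever waits on the whole group: each `wait` names a single partner, so processes that finish early are never held at a barrier.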