Two issues in linear algebra algorithms for multicomputers are addressed: first, how to unify parallel implementations of the same algorithm in a decomposition-independent way; second, how to optimize naive parallel programs while maintaining that decomposition independence. Several matrix decompositions are viewed as instances of a more general allocation function called subcube matrix decomposition. This meta-decomposition yields a programming environment characterized by general primitives that allow meta-algorithms to be designed independently of any particular decomposition. The authors apply this framework to the parallel solution of dense matrices, demonstrating that most existing algorithms can be derived by suitably instantiating the primitives used in the meta-algorithm. A further application of this programming style concerns the optimization of parallel algorithms: the idea of overlapping communication and computation is extended from 1-D decompositions to 2-D decompositions, providing a first step toward a decomposition-independent definition of such optimization strategies.
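As an illustration of how a single allocation function can subsume several decompositions, the following sketch (not the authors' code; the function name and parameters are hypothetical) uses a block-cyclic mapping over a P x Q process grid. Setting Q = 1 recovers a 1-D row decomposition, P = 1 a 1-D column decomposition, and P, Q > 1 a 2-D decomposition, so an algorithm written against this one function runs unchanged under all three:

```python
def owner(i, j, block, P, Q):
    """Return the process (p, q) in a P x Q grid that owns matrix
    element (i, j) under a block-cyclic distribution with the given
    block size. This is one simple allocation function; the paper's
    subcube matrix decomposition is more general."""
    return ((i // block) % P, (j // block) % Q)

# 1-D row decomposition (Q = 1): only the row index selects the owner.
print(owner(5, 99, block=2, P=4, Q=1))  # -> (2, 0)

# 2-D decomposition (P = Q = 2): both indices select the owner.
print(owner(5, 7, block=2, P=2, Q=2))   # -> (0, 1)
```

A meta-algorithm parameterized only by such an owner function never needs to know which decomposition is in use, which is the decomposition independence the abstract describes.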