Scalable Parallel Matrix Multiplication on Distributed Memory Parallel Computers

  • Venue: IPDPS '00 Proceedings of the 14th International Symposium on Parallel and Distributed Processing
  • Year: 2000

Abstract

Consider any known sequential algorithm for matrix multiplication with time complexity $O(N^{\alpha})$, where $2 < \alpha \le 3$. We show that such an algorithm can be parallelized on a distributed memory parallel computer (DMPC) in $O(\log N)$ time by using $N^{\alpha}/\log N$ processors. Such a parallel computation is cost optimal and matches the performance of PRAM. Furthermore, our parallelization on a DMPC can be made fully scalable, that is, for all $1 \le p \le N^{\alpha}/\log N$, multiplying two $N \times N$ matrices can be performed by a DMPC with $p$ processors in $O(N^{\alpha}/p)$ time, i.e., linear speedup and cost optimality can be achieved over the entire range $1 \le p \le N^{\alpha}/\log N$. This unifies all known algorithms for matrix multiplication on DMPC, standard or non-standard, sequential or parallel. Extensions of our methods and results to other parallel systems are also presented. These results constitute significant progress in scalable parallel matrix multiplication (as well as in solving many other important problems) on distributed memory systems, both theoretically and practically.
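To make the scalability claim concrete, the sketch below partitions the rows of $A$ among $p$ workers so that each computes its own block of $C = A \times B$; the per-worker work shrinks roughly linearly in $p$. This is only an illustrative shared-memory analogue (using hypothetical helper names `multiply_block` and `parallel_matmul`), not the paper's DMPC algorithm, which distributes blocks across separate memories and accounts for communication.

```python
# Illustrative sketch: row-block partitioned matrix multiplication.
# Each of the p workers computes a contiguous block of rows of C = A * B.
# Threads stand in for DMPC processors here; a real DMPC would place each
# block in a separate node's memory and communicate B explicitly.
from concurrent.futures import ThreadPoolExecutor

def multiply_block(a_rows, b):
    """Multiply a horizontal slice of A by the full matrix B."""
    k = len(b)          # inner dimension
    n = len(b[0])       # number of columns of B (and of C)
    return [[sum(a_rows[i][t] * b[t][j] for t in range(k))
             for j in range(n)]
            for i in range(len(a_rows))]

def parallel_matmul(a, b, p=2):
    """Compute A * B by splitting A's rows into p blocks, one per worker."""
    chunk = (len(a) + p - 1) // p  # ceil(len(a) / p) rows per block
    blocks = [a[i:i + chunk] for i in range(0, len(a), chunk)]
    with ThreadPoolExecutor(max_workers=p) as ex:
        parts = ex.map(lambda rows: multiply_block(rows, b), blocks)
    return [row for part in parts for row in part]
```

Each worker touches only its row block of $A$, so the data layout mirrors the block distribution that a distributed-memory implementation would use; only the replication of $B$ differs from the message-passing setting.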