IPDPS '01 Proceedings of the 15th International Parallel & Distributed Processing Symposium
Parallel Complexity of Matrix Multiplication
Consider any known sequential algorithm for matrix multiplication over an arbitrary ring with time complexity O(N^α), where 2 < α ≤ 3. We show that such an algorithm can be parallelized on a distributed memory parallel computer (DMPC) in O(log N) time by using N^α/log N processors. Such a parallel computation is cost optimal and matches the performance of PRAM. Furthermore, our parallelization on a DMPC can be made fully scalable, that is, for all 1 ≤ p ≤ N^α/log N, multiplying two N × N matrices can be performed by a DMPC with p processors in O(N^α/p) time, i.e., linear speedup and cost optimality can be achieved in the range [1..N^α/log N]. This unifies all known algorithms for matrix multiplication on DMPC, standard or non-standard, sequential or parallel. Extensions of our methods and results to other parallel systems are also presented. These results constitute significant progress in scalable parallel matrix multiplication (and in solving many other important problems) on distributed memory systems, both theoretically and practically.
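The scalability claim rests on distributing the total work of matrix multiplication evenly over p processors, so that each processor's share of the time shrinks linearly in p. A minimal sketch of that idea, assuming the standard O(N^3) algorithm and a shared-memory thread pool standing in for a DMPC (function names here are illustrative, not from the paper):

```python
# Illustrative row-block decomposition of C = A * B across p workers.
# With the standard algorithm (alpha = 3), each worker performs roughly
# N^3 / p of the scalar work, mirroring the O(N^alpha / p) time claim.
from concurrent.futures import ThreadPoolExecutor

def matmul_rows(A, B, rows):
    # One worker's share: multiply a subset of A's rows by all of B.
    k, n = len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(n)]
            for i in rows]

def parallel_matmul(A, B, p):
    N = len(A)
    # Round-robin assignment of rows to the p workers.
    chunks = [range(w, N, p) for w in range(p)]
    with ThreadPoolExecutor(max_workers=p) as ex:
        blocks = list(ex.map(lambda r: matmul_rows(A, B, r), chunks))
    # Merge each worker's row block back into the full result matrix.
    C = [None] * N
    for rows, block in zip(chunks, blocks):
        for i, row in zip(rows, block):
            C[i] = row
    return C
```

This is only the data-decomposition half of the story; a real DMPC implementation must also organize the communication of matrix blocks between processors, which is where the O(log N) term and the cost-optimality argument in the paper come in.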