Minimizing communication in sparse matrix solvers

  • Authors:
  • Marghoob Mohiyuddin, Mark Hoemmen, James Demmel, Katherine Yelick

  • Affiliations:
  • Marghoob Mohiyuddin: University of California at Berkeley, CA, and Lawrence Berkeley National Laboratory, Berkeley, CA
  • Mark Hoemmen: University of California at Berkeley, CA
  • James Demmel: University of California at Berkeley, CA
  • Katherine Yelick: University of California at Berkeley, CA, and Lawrence Berkeley National Laboratory, Berkeley, CA

  • Venue:
  • Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis
  • Year:
  • 2009

Abstract

Data communication, both within the memory system of a single processor node and between multiple nodes in a system, is the bottleneck in many iterative sparse matrix solvers like CG and GMRES. In a conventional implementation, k iterations perform k sparse matrix-vector multiplications and Ω(k) vector operations such as dot products, so communication grows by a factor of Ω(k) in both the memory system and the network. By reorganizing the sparse-matrix kernel to compute a set of matrix-vector products at once, and reorganizing the rest of the algorithm accordingly, we can perform k iterations by sending O(log P) messages instead of O(k · log P) messages on a parallel machine, and by reading the matrix A from DRAM to cache just once instead of k times on a sequential machine. This reduces communication to the minimum possible. We combine these techniques to form a new variant of GMRES. Our shared-memory implementation on an 8-core Intel Clovertown achieves speedups of up to 4.3x over standard GMRES, without sacrificing convergence rate or numerical stability.
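
To make the reorganized kernel concrete, below is a minimal sketch of the interface such a "matrix powers kernel" presents: given A and x, it returns the basis [x, Ax, A²x, ..., Aᵏx] in a single call, which the rest of the solver then consumes in place of k separate matrix-vector products. This is not the paper's implementation: the function name matrix_powers_kernel and the SciPy test setup are illustrative assumptions, and the loop body shown is the naive reference that streams A from DRAM once per product. The paper's optimized kernel instead tiles A (with redundant "ghost" entries per tile, and per-processor ghost zones in the parallel case) so that each tile is read from DRAM only once while all k products are computed.

    import numpy as np
    import scipy.sparse as sp

    def matrix_powers_kernel(A, x, k):
        """Return V with columns [x, Ax, ..., A^k x].

        Naive reference version: k separate SpMVs, so A is streamed from
        DRAM k times. The communication-avoiding kernel described in the
        paper computes the same k+1 vectors while reading each cache block
        of A only once (hypothetical name; shown here for the interface).
        """
        n = x.shape[0]
        V = np.empty((n, k + 1))
        V[:, 0] = x
        for j in range(k):
            V[:, j + 1] = A @ V[:, j]  # one full pass over A per step
        return V

    if __name__ == "__main__":
        n, k = 1000, 8
        A = sp.random(n, n, density=0.01, format="csr", random_state=0)
        x = np.ones(n)
        V = matrix_powers_kernel(A, x, k)
        print(V.shape)  # (1000, 9): the k+1 Krylov basis vectors

Computing the whole basis in one kernel call is what lets the solver replace k rounds of messages with O(log P) messages in total; the dot products eliminated from the inner loop are likewise batched (in the paper, via TSQR-style orthogonalization in the new GMRES variant).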