Parallel sparse matrix-vector and matrix-transpose-vector multiplication using compressed sparse blocks

  • Authors:
  • Aydin Buluç (University of California, Santa Barbara, Santa Barbara, CA, USA)
  • Jeremy T. Fineman (Massachusetts Institute of Technology, Cambridge, MA, USA)
  • Matteo Frigo (Cilk Arts, Inc., Burlington, MA, USA)
  • John R. Gilbert (University of California, Santa Barbara, Santa Barbara, CA, USA)
  • Charles E. Leiserson (Massachusetts Institute of Technology, Cambridge, MA, USA)

  • Venue:
  • Proceedings of the twenty-first annual symposium on Parallelism in algorithms and architectures
  • Year:
  • 2009

Abstract

This paper introduces a storage format for sparse matrices, called compressed sparse blocks (CSB), which allows both Ax and Aᵀx to be computed efficiently in parallel, where A is an n×n sparse matrix with nnz ≥ n nonzeros and x is a dense n-vector. Our algorithms use Θ(nnz) work (serial running time) and Θ(√n lg n) span (critical-path length), yielding a parallelism of Θ(nnz/(√n lg n)), which is amply high for virtually any large matrix. The storage requirement for CSB is the same as that for the more-standard compressed-sparse-rows (CSR) format, for which computing Ax in parallel is easy but Aᵀx is difficult. Benchmark results indicate that on one processor, the CSB algorithms for Ax and Aᵀx run just as fast as the CSR algorithm for Ax, but the CSB algorithms also scale up linearly with processors until limited by off-chip memory bandwidth.
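
To make the abstract's description of CSB concrete, the following is a minimal C++ sketch of what a CSB container and a per-block y += Ax kernel might look like. All names here (CSB, blk_ptr, lo_row, lo_col, block_multiply) and the choice of unpacked 16-bit within-block coordinates are illustrative assumptions, not the authors' implementation; the paper's actual algorithm also multiplies blockrows in parallel and recursively splits dense blocks rather than looping over one block at a time.

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  struct CSB {
      std::size_t n;                     // matrix is n x n
      std::size_t beta;                  // block dimension, roughly sqrt(n)
      std::size_t nblocks;               // n / beta blocks per side
      std::vector<std::size_t> blk_ptr;  // nblocks*nblocks + 1 offsets into val;
                                         // blocks laid out in row-major order
      std::vector<std::uint16_t> lo_row; // row of each nonzero within its block
      std::vector<std::uint16_t> lo_col; // column of each nonzero within its block
      std::vector<double> val;           // nonzero values, grouped block by block
  };

  // Serial y += (block r,c of A) * x.  This flat loop only shows how the
  // layout is addressed; it is not the paper's recursive parallel kernel.
  void block_multiply(const CSB& A, std::size_t r, std::size_t c,
                      const double* x, double* y) {
      std::size_t begin = A.blk_ptr[r * A.nblocks + c];
      std::size_t end   = A.blk_ptr[r * A.nblocks + c + 1];
      const double* xblk = x + c * A.beta;  // slice of x for block column c
      double* yblk = y + r * A.beta;        // slice of y for block row r
      for (std::size_t k = begin; k < end; ++k)
          yblk[A.lo_row[k]] += A.val[k] * xblk[A.lo_col[k]];
  }

Note the symmetry this layout buys: computing Aᵀx requires only exchanging the roles of lo_row and lo_col (and of the x and y slices), which is why CSB serves both operations equally well, whereas CSR's row-oriented indexing makes Aᵀx hard to parallelize.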