Cache friendly sparse matrix-vector multiplication

  • Authors:
  • Sardar Anisul Haque; Shahadat Hossain; Marc Moreno Maza

  • Affiliations:
  • University of Western Ontario, ON, Canada; University of Lethbridge, AB, Canada; University of Western Ontario, ON, Canada

  • Venue:
  • Proceedings of the 4th International Workshop on Parallel and Symbolic Computation
  • Year:
  • 2010

Abstract

Sparse matrix-vector multiplication, or SpMXV, is an important kernel in scientific computing. For example, the conjugate gradient method (CG) is an iterative linear system solver in which multiplication of the coefficient matrix A with a dense vector x is the main computational step, accounting for as much as 90% of the overall running time. Although the total number of arithmetic operations (involving nonzero entries only) needed to compute Ax is fixed, reducing the likelihood of cache misses per operation remains a challenging research problem. Computers that employ cache memory to improve the speed of data access rely on reuse of the data brought into the cache. The challenge is to exploit data locality, especially for unstructured problems, where modeling data locality is hard. A common remedy is to preprocess the matrix, for instance by reordering it to improve locality; this preprocessing is done once and its cost is amortized over repeated multiplications.
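
As a point of reference (not taken from the paper), the following minimal C sketch of the SpMXV kernel, assuming the common compressed sparse row (CSR) storage format, illustrates why cache behavior dominates: the operation count is fixed at two flops per nonzero, but the indirect accesses x[col[k]] follow the sparsity pattern, so locality in x depends entirely on how the nonzeros are ordered.

    #include <stddef.h>

    /* Minimal sketch of y = A*x with A stored in CSR format.
     * This is a generic illustration, not the authors' implementation.
     *   n      : number of rows of A
     *   rowptr : length n+1; nonzeros of row i occupy indices rowptr[i] .. rowptr[i+1]-1
     *   col    : column index of each stored nonzero
     *   val    : value of each stored nonzero
     * The flop count is fixed (two per nonzero), but the indirect reads
     * x[col[k]] are irregular, so cache misses depend on the nonzero ordering. */
    void spmxv_csr(size_t n, const size_t *rowptr, const size_t *col,
                   const double *val, const double *x, double *y)
    {
        for (size_t i = 0; i < n; i++) {
            double sum = 0.0;
            for (size_t k = rowptr[i]; k < rowptr[i + 1]; k++)
                sum += val[k] * x[col[k]];   /* irregular access to x */
            y[i] = sum;
        }
    }

In an iterative solver such as CG, this kernel is called once per iteration with the same matrix, which is why a one-time locality-improving preprocessing step can pay for itself over the run.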