Performance Models for Evaluation and Automatic Tuning of Symmetric Sparse Matrix-Vector Multiply

  • Authors:
  • Benjamin C. Lee;Richard W. Vuduc;James W. Demmel;Katherine A. Yelick

  • Affiliations:
University of California at Berkeley (all authors)

  • Venue:
  • ICPP '04 Proceedings of the 2004 International Conference on Parallel Processing
  • Year:
  • 2004

Abstract

We present optimizations for sparse matrix-vector multiply (SpMV) and its generalization to multiple vectors, SpMM, when the matrix is symmetric: (1) symmetric storage, (2) register blocking, and (3) vector blocking. Combined with register blocking, symmetry saves more than 50% in matrix storage. We also show performance speedups of 2.1× for SpMV and 2.6× for SpMM, compared to the best non-symmetric register-blocked implementation. We present an approach to selecting tuning parameters, based on empirical modeling and search, that consists of three steps: (1) an off-line benchmark, (2) a runtime search, and (3) a heuristic performance model. This approach generally selects parameters that achieve performance within 85% of that achieved by an exhaustive search. We evaluate our implementations with respect to upper bounds on performance. Our model bounds performance by considering only the cost of memory operations and using lower bounds on the number of cache misses. Our optimized codes are within 68% of the upper bounds.
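
To make the symmetric-storage idea concrete, the following C sketch shows an unblocked (1x1) symmetric SpMV over a CSR matrix that stores only the lower triangle, applying each off-diagonal entry twice. This is only an illustration of the storage-halving technique the abstract describes; the paper's tuned kernels additionally apply register and vector blocking, and the function and parameter names here are hypothetical.

```c
#include <stddef.h>

/* Sketch (assumed interface): y = A*x for symmetric A, storing only the
 * lower triangle (including the diagonal) in CSR format.  Each stored
 * off-diagonal entry a = A[i][j] (j < i) contributes to row i and,
 * mirrored, to row j, so roughly half the nonzeros need to be stored. */
void spmv_sym_csr(size_t n,
                  const size_t *rowptr,   /* n+1 row pointers            */
                  const size_t *colind,   /* column indices, with j <= i */
                  const double *val,      /* stored nonzero values       */
                  const double *x,        /* input vector, length n      */
                  double *y)              /* output vector, length n     */
{
    for (size_t i = 0; i < n; i++)
        y[i] = 0.0;

    for (size_t i = 0; i < n; i++) {
        for (size_t k = rowptr[i]; k < rowptr[i + 1]; k++) {
            size_t j = colind[k];
            double a = val[k];
            y[i] += a * x[j];          /* lower-triangle contribution   */
            if (j != i)
                y[j] += a * x[i];      /* mirrored upper-triangle part  */
        }
    }
}
```

Register blocking would replace the scalar update above with small dense r-by-c block multiplies kept in registers, and vector blocking would reuse each matrix block across several right-hand-side vectors (SpMM); the tuning-parameter selection in the paper chooses those block sizes.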