Optimization of sparse matrix-vector multiplication on emerging multicore platforms

  • Authors:
  • Samuel Williams; Leonid Oliker; Richard Vuduc; John Shalf; Katherine Yelick; James Demmel

  • Affiliations:
  • Samuel Williams: CRD/NERSC, Lawrence Berkeley National Laboratory, One Cyclotron Rd., MS:50A-1148, Berkeley, CA 94720, USA; and Computer Science Division, University of California at Berkeley, Berkeley, CA 94720, USA
  • Leonid Oliker: CRD/NERSC, Lawrence Berkeley National Laboratory, One Cyclotron Rd., MS:50A-1148, Berkeley, CA 94720, USA
  • Richard Vuduc: College of Computing, Georgia Institute of Technology, Atlanta, GA 30332-0765, USA
  • John Shalf: CRD/NERSC, Lawrence Berkeley National Laboratory, One Cyclotron Rd., MS:50A-1148, Berkeley, CA 94720, USA
  • Katherine Yelick: CRD/NERSC, Lawrence Berkeley National Laboratory, One Cyclotron Rd., MS:50A-1148, Berkeley, CA 94720, USA; and Computer Science Division, University of California at Berkeley, Berkeley, CA 94720, USA
  • James Demmel: Computer Science Division, University of California at Berkeley, Berkeley, CA 94720, USA

  • Venue:
  • Parallel Computing
  • Year:
  • 2009

Abstract

We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore-specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV), one of the most heavily used kernels in scientific computing, across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD quad-core, AMD dual-core, and Intel quad-core designs, the heterogeneous STI Cell, as well as one of the first scientific studies of the highly multithreaded Sun Victoria Falls (a Niagara2 SMP). We present several optimization strategies that are especially effective in the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural trade-offs of leading multicore design strategies in the context of demanding memory-bound numerical algorithms.
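For context, SpMV computes y = A*x for a sparse matrix A, most commonly stored in compressed sparse row (CSR) format. The sketch below is a minimal, untuned CSR kernel in C, included only to illustrate the baseline computation that such optimization work targets; it is not the authors' code, and the names (spmv_csr, row_ptr, col_idx, vals) are illustrative.

    #include <stddef.h>

    /* Minimal reference CSR SpMV: y = A*x.
     * CSR layout: row_ptr[i] .. row_ptr[i+1]-1 index the nonzeros of row i,
     * col_idx[k] is the column of the k-th nonzero, vals[k] its value.
     * This is a generic baseline sketch, not the paper's optimized kernel. */
    void spmv_csr(size_t n_rows,
                  const size_t *row_ptr,
                  const int *col_idx,
                  const double *vals,
                  const double *x,
                  double *y)
    {
        for (size_t i = 0; i < n_rows; ++i) {
            double sum = 0.0;
            for (size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
                sum += vals[k] * x[col_idx[k]];
            y[i] = sum;
        }
    }

Each nonzero (value plus column index) is read from memory once and used in a single multiply-add, so the kernel's arithmetic intensity is low and performance is typically bound by memory bandwidth rather than compute, which is why multicore-aware memory-system optimizations are the focus of studies like this one.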