Optimizing matrix multiplication for a short-vector SIMD architecture - CELL processor

  • Authors:
  • Jakub Kurzak; Wesley Alvaro; Jack Dongarra

  • Affiliations:
  • Department of Electrical Engineering and Computer Science, University of Tennessee, United States; Department of Electrical Engineering and Computer Science, University of Tennessee, United States; Department of Electrical Engineering and Computer Science, University of Tennessee, United States and Computer Science and Mathematics Division, Oak Ridge National Laboratory, United States and Sc ...

  • Venue:
  • Parallel Computing
  • Year:
  • 2009

Abstract

Matrix multiplication is one of the most common numerical operations, especially in the area of dense linear algebra, where it forms the core of many important algorithms, including solvers of linear systems of equations, least squares problems, and singular value and eigenvalue computations. The STI CELL processor exceeds the capabilities of any other processor available today in terms of peak single precision floating point performance, aside from special purpose accelerators such as Graphics Processing Units (GPUs). In order to fully exploit the potential of the CELL processor for a wide range of numerical algorithms, a fast implementation of the matrix multiplication operation is essential. The crucial component is the matrix multiplication kernel crafted for the short-vector Single Instruction Multiple Data (SIMD) architecture of the Synergistic Processing Element (SPE) of the CELL processor. In this paper, single precision matrix multiplication kernels are presented implementing the C = C - A x B^T operation and the C = C - A x B operation for matrices of size 64 x 64 elements. For the latter case, a performance of 25.55 Gflop/s is reported, or 99.80% of the peak, using as little as 5.9 kB of storage for code and auxiliary data structures.
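
For reference, the update the kernels compute, C = C - A x B on 64 x 64 single precision matrices, can be written as the following plain C sketch. This is only a naive scalar baseline for illustration, not the paper's method: the kernels described in the paper are hand-tuned for the SPE's short-vector SIMD instruction set, and this straightforward triple loop would fall far short of the quoted 25.55 Gflop/s.

```c
#include <stddef.h>

/* Naive single precision reference for the C = C - A x B update on
 * 64x64 row-major matrices. Correctness baseline only; the optimized
 * SPE kernel in the paper uses SIMD intrinsics and careful scheduling. */
#define N 64

void sgemm_update(const float A[N][N], const float B[N][N], float C[N][N])
{
    for (size_t i = 0; i < N; i++) {
        for (size_t j = 0; j < N; j++) {
            float acc = 0.0f;
            for (size_t k = 0; k < N; k++)
                acc += A[i][k] * B[k][j];
            C[i][j] -= acc;   /* C = C - A x B */
        }
    }
}
```

The C = C - A x B^T variant differs only in indexing B as B[j][k], which gives unit-stride access to both A and B in the inner loop.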