A Note on Auto-tuning GEMM for GPUs

  • Authors:
  • Yinan Li (University of Tennessee, USA)
  • Jack Dongarra (University of Tennessee, USA; Oak Ridge National Laboratory, USA; University of Manchester, UK)
  • Stanimire Tomov (University of Tennessee, USA)

  • Venue:
  • ICCS '09 Proceedings of the 9th International Conference on Computational Science: Part I
  • Year:
  • 2009

Abstract

The development of high performance dense linear algebra (DLA) critically depends on highly optimized BLAS, and especially on the matrix multiplication routine (GEMM). This is especially true for Graphics Processing Units (GPUs), as evidenced by recently published results on DLA for GPUs that rely on highly optimized GEMM. However, the current best GEMM performance, e.g., up to 375 GFlop/s in single precision and up to 75 GFlop/s in double precision arithmetic on NVIDIA's GTX 280, is difficult to achieve. The development involves extensive GPU knowledge and even reverse engineering to uncover undocumented details of the architecture that have proven to be of key importance. In this paper, we describe GPU GEMM auto-tuning optimization techniques that allow us to keep up with changing hardware by rapidly reusing, rather than reinventing, existing ideas. Auto-tuning, as we show in this paper, is a very practical solution: in addition to easy portability, it can often deliver substantial speedups even on current GPUs (e.g., up to 27% in certain cases for both single and double precision GEMM on the GTX 280).
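
The paper's tuning infrastructure is not reproduced here, but the core idea of empirical auto-tuning, generating parameterized GEMM kernel variants, benchmarking each on the target GPU, and keeping the fastest, can be sketched in a few dozen lines of CUDA. The tile sizes, kernel structure, and search loop below are illustrative assumptions, not the generator or parameter space used by the authors.

```cuda
// Minimal sketch of empirical auto-tuning for SGEMM (illustrative only).
// Idea: parameterize a tiled kernel by its tile size, time each candidate
// on the target GPU, and report which variant is fastest.
#include <cstdio>
#include <cuda_runtime.h>

// One kernel variant per TILE value: C = A * B, with N divisible by TILE.
template <int TILE>
__global__ void sgemm_tiled(const float* A, const float* B, float* C, int N) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];
    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;
    for (int t = 0; t < N; t += TILE) {
        As[threadIdx.y][threadIdx.x] = A[row * N + t + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t + threadIdx.y) * N + col];
        __syncthreads();
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    C[row * N + col] = acc;
}

// Time one variant and return its speed in GFlop/s (2*N^3 flops per GEMM).
template <int TILE>
float benchmark(const float* A, const float* B, float* C, int N) {
    dim3 block(TILE, TILE);
    dim3 grid(N / TILE, N / TILE);
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    sgemm_tiled<TILE><<<grid, block>>>(A, B, C, N);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 2.0f * N * N * (float)N / (ms * 1e6f);
}

int main() {
    const int N = 1024;  // problem size chosen for illustration
    float *A, *B, *C;
    cudaMalloc(&A, N * N * sizeof(float));
    cudaMalloc(&B, N * N * sizeof(float));
    cudaMalloc(&C, N * N * sizeof(float));
    cudaMemset(A, 0, N * N * sizeof(float));
    cudaMemset(B, 0, N * N * sizeof(float));

    // Empirical search over a (tiny) candidate space: keep the fastest tile.
    printf("TILE= 8: %6.1f GFlop/s\n", benchmark<8>(A, B, C, N));
    printf("TILE=16: %6.1f GFlop/s\n", benchmark<16>(A, B, C, N));
    printf("TILE=32: %6.1f GFlop/s\n", benchmark<32>(A, B, C, N));

    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

A real GEMM auto-tuner would search a much larger space (thread block shapes, register blocking, memory access patterns, prefetching) and emit the winning variant as the production kernel, which is what makes the approach portable across changing GPU generations.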