Fast implementation of DGEMM on Fermi GPU

  • Authors:
  • Guangming Tan, Linchuan Li, Sean Treichler, Everett Phillips, Yungang Bao, Ninghui Sun

  • Affiliations:
  • Key Laboratory of Computer Architecture, Institute of Computing Technology; Nvidia Corporation

  • Venue:
  • Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis
  • Year:
  • 2011


Abstract

In this paper we present a thorough experience of tuning double-precision matrix-matrix multiplication (DGEMM) on the Fermi GPU architecture. We choose an optimal algorithm with blocking in both shared memory and registers to satisfy the constraints of the Fermi memory hierarchy. Our optimization strategy is further guided by a performance model based on micro-architecture benchmarks. Our optimizations include software pipelining, use of vector memory operations, and instruction scheduling. Our best CUDA algorithm achieves performance comparable to the latest CUBLAS library. We further improve upon this with an implementation in the native machine language, leading to a 20% increase in performance. That is, the achieved peak performance (efficiency) improves from 302 Gflop/s (58%) to 362 Gflop/s (70%).