An insightful program performance tuning chain for GPU computing

  • Authors:
  • Haipeng Jia, Yunquan Zhang, Guoping Long, Shengen Yan

  • Affiliations:
  • Lab. of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences, China; College of Information Science and Engineering, The Ocean University of China, China
  • Lab. of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences, China; State Key Laboratory of Computing Science, The Chinese Academy of Sciences, China
  • Lab. of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences, China
  • Lab. of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences, China; State Key Laboratory of Computing Science, The Chinese Academy of Sciences, China; G ...

  • Venue:
  • ICA3PP'12 Proceedings of the 12th international conference on Algorithms and Architectures for Parallel Processing - Volume Part I
  • Year:
  • 2012


Abstract

Optimizing GPU kernels is challenging because the process requires deep technical knowledge of the underlying hardware. Modern GPU architectures are also becoming increasingly diversified, which further exacerbates the already difficult problem of performance optimization. This paper presents an insightful performance tuning chain for GPUs. The goal is to help non-expert programmers with limited knowledge of GPU architectures implement high-performance GPU kernels directly. We achieve this by providing performance information that identifies GPU program performance bottlenecks and indicates which optimization methods should be adopted, so as to facilitate the best match between algorithm features and the characteristics of the underlying hardware. To demonstrate the use of the tuning chain, we optimize three representative GPU kernels with different compute intensities: Matrix Transpose, Laplace Transform, and Integral, on both NVIDIA and AMD GPUs. Experimental results demonstrate that, under the guidance of our tuning chain, these kernels achieve speedups of 7.8x to 42.4x over their naïve implementations on both platforms.
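The abstract's idea of matching optimization methods to bottlenecks is commonly formalized through a kernel's arithmetic (compute) intensity: FLOPs performed per byte of memory traffic, compared against the device's machine balance. The paper does not spell out its exact procedure here, so the following is a generic roofline-style sketch, not the authors' tuning chain; the hardware peak numbers are illustrative placeholders.

```python
# Roofline-style bottleneck classification: a generic sketch, NOT the
# paper's actual tuning chain. Peak figures below are illustrative
# placeholders, not measurements from the paper.

def arithmetic_intensity(flops, bytes_moved):
    """FLOPs performed per byte of memory traffic."""
    return flops / bytes_moved

def classify_kernel(flops, bytes_moved, peak_gflops, peak_bw_gbs):
    """Return 'memory-bound' or 'compute-bound' for a kernel on a device.

    A kernel is memory-bound when its arithmetic intensity falls below
    the device's machine balance (peak compute / peak bandwidth).
    """
    intensity = arithmetic_intensity(flops, bytes_moved)
    machine_balance = peak_gflops / peak_bw_gbs  # FLOPs per byte
    return "memory-bound" if intensity < machine_balance else "compute-bound"

# Example: a matrix transpose performs essentially no arithmetic, only
# data movement, so its intensity is near zero -> memory-bound. The
# right optimizations target memory access (coalescing, shared-memory
# tiling), not instruction throughput.
n = 4096
transpose_bytes = 2 * n * n * 4  # one read + one write per float32 element
print(classify_kernel(flops=1, bytes_moved=transpose_bytes,
                      peak_gflops=1000.0, peak_bw_gbs=150.0))
# -> memory-bound
```

A compute-heavy kernel such as the paper's Integral would land on the other side of the machine balance, steering the programmer toward instruction-level optimizations instead.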