GPURoofline: a model for guiding performance optimizations on GPUs

  • Authors:
  • Haipeng Jia, Yunquan Zhang, Guoping Long, Jianliang Xu, Shengen Yan, Yan Li

  • Affiliations:
  • Lab. of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences, China; College of Information Science and Engineering, The Ocean University of China, China
  • Lab. of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences, China; State Key Laboratory of Computing Science, The Chinese Academy of Sciences, China
  • Lab. of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences, China
  • College of Information Science and Engineering, The Ocean University of China, China
  • Lab. of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences, China; State Key Laboratory of Computing Science, The Chinese Academy of Sciences, China; G ...
  • Lab. of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences, China; State Key Laboratory of Computing Science, The Chinese Academy of Sciences, China; G ...

  • Venue:
  • Euro-Par'12 Proceedings of the 18th international conference on Parallel Processing
  • Year:
  • 2012

Abstract

Performance optimization on GPUs requires deep technical knowledge of the underlying hardware. Modern GPU architectures are becoming increasingly diversified, which further exacerbates an already difficult problem. This paper presents GPURoofline, an empirical model for guiding performance optimizations on GPUs. Its goal is to help non-expert programmers with limited knowledge of GPU architectures implement high-performance GPU kernels. The model addresses this problem by exploring potential performance bottlenecks and by evaluating whether specific optimization techniques bring any performance improvement. To demonstrate the usage of the model, we optimize four representative kernels with different computational densities, namely matrix transpose, Laplace transform, integral, and face detection, on both NVIDIA and AMD GPUs. Experimental results show that, under the guidance of GPURoofline, these kernels achieve speedups of 3.74x to 14.8x over their naïve implementations on both NVIDIA and AMD GPU platforms.
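
As background, GPURoofline builds on the classic roofline idea that attainable throughput is bounded by the smaller of peak compute rate and peak memory bandwidth multiplied by a kernel's operational intensity. The sketch below illustrates only that generic bound; the hardware numbers and the helper function are assumed placeholders, not values or code from the paper.

```python
# Minimal sketch of the generic roofline bound (not the paper's implementation).
# Peak compute/bandwidth figures are assumed placeholders for illustration.

def roofline_bound(peak_gflops, peak_bw_gbs, operational_intensity):
    """Attainable GFLOP/s = min(peak compute, peak bandwidth * flops-per-byte)."""
    return min(peak_gflops, peak_bw_gbs * operational_intensity)

# Example: a kernel performing 0.25 flops per byte on a hypothetical GPU with
# 1000 GFLOP/s peak compute and 150 GB/s peak memory bandwidth.
oi = 0.25
bound = roofline_bound(1000.0, 150.0, oi)
ridge = 1000.0 / 150.0  # intensity above which the kernel is no longer memory-bound
print(f"attainable: {bound:.1f} GFLOP/s "
      f"({'memory-bound' if oi < ridge else 'compute-bound'})")
```

In this illustrative setting, a low-intensity kernel such as matrix transpose would sit well under the ridge point and be classified as memory-bound, which is the kind of diagnosis the model uses to suggest where optimization effort should go.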