A performance analysis framework for identifying potential benefits in GPGPU applications

  • Authors:
  • Jaewoong Sim; Aniruddha Dasgupta; Hyesoon Kim; Richard Vuduc

  • Affiliations:
  • Georgia Institute of Technology, Atlanta, GA, USA; Advanced Micro Devices, Austin, TX, USA; Georgia Institute of Technology, Atlanta, GA, USA; Georgia Institute of Technology, Atlanta, GA, USA

  • Venue:
  • Proceedings of the 17th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '12)
  • Year:
  • 2012

Abstract

Tuning code for GPGPU and other emerging many-core platforms is a challenge because few models or tools can precisely pinpoint the root cause of performance bottlenecks. In this paper, we present a performance analysis framework that can help shed light on such bottlenecks for GPGPU applications. Although a handful of GPGPU profiling tools exist, most traditional tools simply provide programmers with a variety of measurements and metrics obtained by running applications. It is often difficult to map these metrics back to the root causes of slowdowns, much less to decide which optimization step to take next to alleviate a bottleneck. In our approach, we first develop an analytical performance model that can precisely predict performance and aims to provide programmer-interpretable metrics. We then apply static and dynamic profiling to instantiate the performance model for a particular input code and show how the model can predict the potential performance benefits. We demonstrate our framework on a suite of micro-benchmarks as well as on a variety of computations extracted from real codes.
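
To make the idea of "predicting potential benefits" concrete, the sketch below is an illustrative toy model only, not the authors' actual GPGPU performance model: it assumes a simplified cost breakdown (compute cycles, memory cycles, a latency-overlap fraction, and serialization overhead, all hypothetical names) and shows how profiled quantities could be combined to bound the speedup from relieving a given bottleneck.

```python
# Illustrative sketch only: a toy analytical model in the spirit of the paper,
# NOT the authors' actual GPGPU performance model. All names and formulas here
# are assumptions for demonstration.

from dataclasses import dataclass, replace

@dataclass
class KernelProfile:
    comp_cycles: float           # cycles spent on computation (from static/dynamic profiling)
    mem_cycles: float            # cycles spent waiting on memory
    overlap: float               # fraction of memory latency hidden by parallelism (0..1)
    serialization_cycles: float  # extra cycles from, e.g., divergence or bank conflicts

def predicted_cost(p: KernelProfile) -> float:
    """Toy cost model: compute time plus exposed memory time plus serialization overhead."""
    exposed_mem = p.mem_cycles * (1.0 - p.overlap)
    return p.comp_cycles + exposed_mem + p.serialization_cycles

def potential_benefit(p: KernelProfile) -> dict:
    """Estimate the speedup if each bottleneck were ideally removed in isolation."""
    base = predicted_cost(p)
    return {
        "remove_serialization": base / predicted_cost(replace(p, serialization_cycles=0.0)),
        "hide_all_memory_latency": base / predicted_cost(replace(p, overlap=1.0)),
    }

if __name__ == "__main__":
    # Hypothetical profiled values for one kernel.
    prof = KernelProfile(comp_cycles=4.0e6, mem_cycles=6.0e6,
                         overlap=0.5, serialization_cycles=1.0e6)
    print(potential_benefit(prof))
    # {'remove_serialization': 1.142..., 'hide_all_memory_latency': 1.6}
```

The per-bottleneck ratios play the role of programmer-interpretable benefit metrics: they rank which optimization direction (reducing serialization versus improving memory latency hiding, in this toy setup) is likely to pay off most.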