A control-structure splitting optimization for GPGPU

  • Authors:
  • Snaider Carrillo; Jakob Siegel; Xiaoming Li

  • Affiliations:
  • University of Delaware, Newark, USA (all authors)

  • Venue:
  • Proceedings of the 6th ACM conference on Computing frontiers
  • Year:
  • 2009


Abstract

Control statements in a GPU program, such as loops and branches, pose serious challenges for the efficient use of GPU resources, because they lead to the serialization of threads and consequently ruin the occupancy of the GPU, that is, the number of threads running concurrently. Unlike a traditional vector processing unit inside a general-purpose processor, the GPU cannot leave control statements to the CPU, because fine-grained statement scheduling between the GPU and the CPU is impossible. An effective method is therefore needed to handle control statements "just in place" on the GPU. In this paper, we propose novel techniques to transform control statements so that they can be executed efficiently on GPUs. Our techniques deliberately increase code redundancy, which might be deemed a "de-optimization" for a CPU, to improve the occupancy of a program on the GPU and thereby improve performance. We focus our attention on how common programming structures such as loops and branches decrease the occupancy of single kernels, and on how to counter that. We demonstrate our optimizations on a synthetic benchmark and on a complex parallel algorithm, the Lattice Boltzmann Method (LBM). Our results show that these techniques are highly effective and can lead to an increase in occupancy and a drastic improvement in performance compared to the non-split versions of the programs.
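To make the kind of transformation the abstract describes concrete, the following is a minimal CUDA sketch of splitting a divergent branch into separate kernels. The kernel names, the data layout, and the host-side index partitioning are our own illustrative assumptions, not details taken from the paper.

```cuda
// Hypothetical illustration of control-structure splitting (not the
// paper's actual code). In the fused kernel, threads of the same warp
// that take different sides of the branch are serialized.
__global__ void fused(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (in[i] > 0.0f)
        out[i] = in[i] * in[i];   // one branch path
    else
        out[i] = 0.0f;            // the other branch path
}

// Split version: the host first partitions indices by branch outcome,
// then launches one kernel per path. Each kernel has uniform control
// flow, so no warp diverges. The extra bookkeeping is redundant work
// that would be a "de-optimization" on a CPU, but on a GPU the uniform
// kernels can use fewer resources and run at higher occupancy.
__global__ void positivePath(const float *in, float *out,
                             const int *idx, int m) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t < m) { int i = idx[t]; out[i] = in[i] * in[i]; }
}

__global__ void zeroPath(float *out, const int *idx, int m) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t < m) out[idx[t]] = 0.0f;
}
```

In this sketch the branch condition depends only on the input data, so the partition can be computed once on the host (or with a GPU stream-compaction pass); whether the two extra kernel launches pay off depends on how expensive and how divergent the original branch was.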