Preemption of a CUDA Kernel Function

  • Authors:
  • Jon Calhoun; Hai Jiang

  • Venue:
  • SNPD '12 Proceedings of the 2012 13th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing

  • Year:
  • 2012

Abstract

As graphics processing units (GPUs) gain adoption as general-purpose parallel compute devices, several key problems must be addressed before their use becomes more practical and more user friendly. One such problem is that the special functions designed to execute on GPUs, called kernel functions, are non-preemptable. Once a kernel is issued to the GPU, it remains there until its execution finishes or it is killed. If the kernel occupies all of the GPU's execution units, no other kernel can execute. This paper proposes a way to preempt an executing kernel function: at some point during its execution, the kernel saves its state, halts, and frees the GPU's execution units for other kernels to run. After a given amount of time, the halted kernel regains control of the GPU and completes its execution as if it had never been halted. Experimental results demonstrate the effectiveness of the proposed scheme.
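
The abstract does not spell out the mechanism, but one common way to realize this kind of checkpoint-and-resume preemption is cooperative: the kernel periodically polls a host-controlled flag, saves its partial state to global memory, and returns early so other kernels can run; a later relaunch picks up where it left off. The CUDA sketch below illustrates that pattern only; the kernel name, flag, and state layout are assumptions for illustration, not the authors' implementation.

    #include <cuda_runtime.h>

    // Cooperative-preemption sketch (assumed names and state layout, not the paper's code).
    // The host sets *preempt_flag to 1 when it wants the GPU back; each thread then
    // checkpoints its partial sum and loop index to global memory and exits.
    // Relaunching the same kernel later resumes from the saved state.
    __global__ void preemptable_sum(const float *in, float *partial, int *progress,
                                    const volatile int *preempt_flag, int n)
    {
        int tid    = blockIdx.x * blockDim.x + threadIdx.x;
        int stride = gridDim.x * blockDim.x;

        float acc = partial[tid];          // restore saved partial result (0.0f on first launch)
        int start = progress[tid];         // restore saved loop position
        if (start == 0) start = tid;       // first launch: begin at this thread's offset

        for (int i = start; i < n; i += stride) {
            if (*preempt_flag) {           // host has requested preemption
                partial[tid]  = acc;       // save state to global memory
                progress[tid] = i;         // remember where to resume
                return;                    // release the GPU's execution units
            }
            acc += in[i];
        }
        partial[tid]  = acc;
        progress[tid] = n;                 // mark this thread as finished
    }

On the host side (outline), preempt_flag would be allocated with cudaHostAlloc using cudaHostAllocMapped so the CPU can flip it while the kernel is running, partial and progress would be zero-initialized with cudaMemset, and the kernel would simply be launched again after the flag is cleared to resume. The trade-off of such a cooperative scheme is that preemption latency depends on how often the kernel polls the flag, whereas saving only per-thread loop state keeps the checkpoint small.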