Programmable data dependencies and placements

  • Authors:
  • Eva Burrows, Magne Haveraaen

  • Affiliations:
  • University of Bergen, Bergen, Norway (both authors)

  • Venue:
  • DAMP '12: Proceedings of the 7th Workshop on Declarative Aspects and Applications of Multicore Programming
  • Year:
  • 2012

Abstract

One of the major issues in parallelizing applications is dealing with the inherent dependency structure of the program. Dependence analysis provides execution-order constraints between program statements and can establish legitimate ways to carry out program code transformations. Data dependency constitutes one class of dependencies obtained through dependence analysis, a form related to data parallelism. Since automatic dependence analysis has proved too complex for the general case, parallelizing compilers cannot parallelize every dependency pattern. In many cases, the data dependency pattern of a computation is independent of the actual data values, i.e., it is static, though the pattern may scale with the size of the data set. In this paper, we explore how a static, scalable data dependency can be presented to the compiler in a meaningful way. We describe the major components of a proposed framework in which static and possibly scalable data dependencies are turned into programmable entities. The framework provides a high-level, easy-to-manipulate way to deal with data distribution and placement of computations on any parallel system that has a well-defined space-time communication structure. The data dependency information, together with the placement information, can be utilised by a compiler to generate parallel code. This presentation explores the idea of programmable data placements in more detail through concrete examples for the CUDA API of Nvidia GPUs.
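To make the abstract's central notion concrete, the following is an illustrative sketch (not the authors' framework or its API, whose details are in the paper): a parallel tree reduction has a static, scalable dependency pattern. At each step, the indices an element reads are a pure function of (step, index, size) that never inspects the data values, so the pattern is static in the sense above, yet it scales with the size of the data set. The function and variable names here are hypothetical, chosen for illustration only.

```python
# Illustrative sketch (not the paper's framework): the static, scalable
# data dependency of a parallel (tree) reduction, expressed as an
# ordinary programmable function over the index space.

def reduction_deps(step, i, n):
    """Indices that element i reads at the given step (stride = 2**step).

    This is a pure function of (step, i, n): it never looks at the data,
    which is what makes the dependency pattern static, while the
    parameter n lets the same pattern scale with the data set.
    """
    stride = 2 ** step
    partner = i + stride
    return (i, partner) if partner < n else (i,)

def tree_reduce(xs):
    """Execute a sum reduction by following the dependency pattern.

    A compiler (or runtime) holding reduction_deps as a programmable
    entity could instead map each step onto a parallel target; here we
    simply interpret it sequentially to show the pattern is complete.
    """
    data = list(xs)
    n = len(data)
    step = 0
    while 2 ** step < n:
        stride = 2 ** step
        for i in range(0, n, 2 * stride):
            # Accumulate from every dependency of i other than i itself.
            for j in reduction_deps(step, i, n)[1:]:
                data[i] += data[j]
        step += 1
    return data[0]
```

In the paper's setting, such an index-to-index mapping, paired with placement information (which processor or CUDA thread owns which index at which step), is what the compiler consumes to generate parallel code.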