High-performance computing has recently seen a surge of interest in heterogeneous systems, with an emphasis on modern Graphics Processing Units (GPUs). These devices offer tremendous potential for performance and efficiency in important large-scale applications of computational science. Exploiting this potential can be challenging, however, as one must adapt to the specialized and rapidly evolving computing environment that GPUs present. One way of addressing this challenge is to embrace better techniques and to develop tools tailored to that environment. This article presents one such technique, GPU run-time code generation (RTCG), along with PyCUDA and PyOpenCL, two open-source toolkits that support it. In introducing PyCUDA and PyOpenCL, the article proposes the combination of a dynamic, high-level scripting language with the massive performance of a GPU as a compelling two-tiered computing platform, potentially offering significant performance and productivity advantages over conventional single-tier, static systems. The concept of RTCG is simple and easily implemented using existing, robust infrastructure. Nonetheless, it is powerful enough to support (and encourage) the creation of custom application-specific tools by its users. The premise of the article is illustrated by a wide range of examples in which the technique has been applied with considerable success.
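The core idea of RTCG can be sketched in a few lines of Python: the host program assembles CUDA C kernel source as a string at run time, specializing it to parameters known only then, and hands the result to an on-the-fly compiler such as PyCUDA's `SourceModule`. The kernel below (an `axpy`-style operation) and its parameterization are illustrative assumptions, not code from the article; the PyCUDA compilation step is shown only as a comment because it requires a GPU.

```python
# Minimal sketch of GPU run-time code generation (RTCG), under the
# assumption of a simple axpy-style kernel: CUDA C source is built as
# a Python string at run time, specialized to a scalar type that is
# known only at run time.

KERNEL_TEMPLATE = """
__global__ void axpy_{dtype}(const {dtype} a, const {dtype} *x,
                             {dtype} *y, int n)
{{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}}
"""

def generate_kernel(dtype):
    """Specialize the kernel source to a concrete scalar type."""
    return KERNEL_TEMPLATE.format(dtype=dtype)

src = generate_kernel("float")

# With PyCUDA, the generated source would then be compiled on the fly
# and invoked from Python, e.g.:
#   from pycuda.compiler import SourceModule
#   mod = SourceModule(src)
#   axpy = mod.get_function("axpy_float")
print(src)
```

Because the scripting layer sees the final problem parameters before any GPU code exists, it can bake them into the source (types, unroll factors, tile sizes) rather than branching on them at kernel run time, which is the productivity and performance argument the article makes for the two-tiered platform.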