The growing demands of 3D rendering and gaming have made the integration of CPUs and GPUs increasingly popular. This heterogeneous integration, however, also makes programming more difficult. Accordingly, an open parallel programming standard, OpenCL, was proposed to provide a unified programming model across GPU platforms. The overall performance of OpenCL programmes depends strongly on programming style and optimisation method. In this study, we discuss several optimisation techniques for OpenCL, including massive multithreading, vectorisation, and data privatisation, and then examine the advantages and drawbacks of each. A performance comparison of these mechanisms is also provided. Finally, several benchmarks are adopted to illustrate the effect of the different optimisations. The experimental results show that the speedups of the worst- and the best-optimised programmes are 26 and 2,200, respectively.