Despite the apparent simplicity of the OpenMP directive-based shared memory programming model, and the sophisticated dependence analysis and code generation capabilities of the ParaWise/CAPO tools, experience shows that a level of expertise is still required to produce efficient parallel code. In a real-world application, investigating a single loop in generated parallel code can quickly become an in-depth inspection of numerous dependences across many routines. An understanding of those dependences is also needed to interpret the information the tools provide and to supply the required feedback. The ParaWise Expert Assistant has been developed to automate this investigation and to present questions to the user about, and in the context of, their application code. In this paper, we demonstrate that knowledge of dependence analysis and of OpenMP is no longer essential to produce efficient parallel code with the Expert Assistant. It is hoped that this will enable a far wider audience to use the tools and, subsequently, to exploit the benefits of large parallel systems.