Parallel programming implementation details often obfuscate the original algorithm and make later algorithm maintenance difficult. Although parallel programming patterns help guide the structured development of parallel programs, they do not necessarily avoid this code obfuscation problem. In this paper, we examine how emerging and existing programming models, realized as programming languages, preprocessor directives, and/or libraries, support the Implementation Strategy Patterns proposed as part of Our Pattern Language. We posit that some of these programming models can avoid code obfuscation through features that prevent the tangling of the algorithm with implementation details for parallelization and performance optimization. We qualitatively evaluate these features in terms of how much tangling they prevent and how much programmer control they provide over implementation details. We conclude with remarks on potential research directions for producing efficient and maintainable parallel programs by separating the algorithm from its implementation.