The growth of commercial and academic interest in parallel and distributed computing over the past fifteen years has been accompanied by a corresponding increase in the number of available parallel programming systems, and in the variety of approaches to parallel programming. However, little or no work has been done to compare or evaluate different systems, or to develop criteria by which such comparisons could be made. As a result, a typical parallel programming system is usually evaluated by the ease or difficulty with which its author(s) can implement a small set of trivially parallel algorithms.

This paper is a step toward rectifying this situation. We present several criteria by which parallel programming systems might be quantitatively evaluated, and assess the importance and measurability of each. Of these criteria, we feel that usability is the most important, but also the least frequently quantified. For illustration, we compare the approach taken in the Enterprise parallel programming environment with several other systems and their approaches. We also predict the results we expect from these comparisons. Finally, we argue that while the cost of performing quantitative measurements of usability might seem large, the cost of not performing them, as borne by a group that selects an inappropriate or low-performing programming system, is likely to be much larger.