Theory of linear and integer programming
The structure of parafrase-2: an advanced parallelizing compiler for C and FORTRAN
Selected papers of the second workshop on Languages and compilers for parallel computing
GUM: a portable parallel implementation of Haskell
PLDI '96 Proceedings of the ACM SIGPLAN 1996 conference on Programming language design and implementation
Programming with Divide-and-Conquer Skeletons: A Case Study of FFT
The Journal of Supercomputing
OpenMP: An Industry-Standard API for Shared-Memory Programming
IEEE Computational Science & Engineering
Polaris: Improving the Effectiveness of Parallelizing Compilers
LCPC '94 Proceedings of the 7th International Workshop on Languages and Compilers for Parallel Computing
Iteration Space Tiling for Memory Hierarchies
Proceedings of the Third SIAM Conference on Parallel Processing for Scientific Computing
A Methodology for Deriving Parallel Programs with a Family of Parallel Abstract Machines
Euro-Par '97 Proceedings of the Third International Euro-Par Conference on Parallel Processing
Loop Parallelization in the Polytope Model
CONCUR '93 Proceedings of the 4th International Conference on Concurrency Theory
OPERA: a toolbox for loop parallelization
Proceedings of the First IFIP TC10 International Workshop on Software Engineering for Parallel and Distributed Systems
Patterns and skeletons for parallel and distributed computing
A Proposal for a User-Level, Message-Passing Interface in a Distributed Memory Environment
Compilation of a specialized functional language for massively parallel computers
Journal of Functional Programming
The idea of the stepwise refinement of a problem specification into an efficient target program dates back to the 1970s. With the high-level programming languages available today, it is becoming practical to make every node of the refinement tree executable, not just the leaves of the tree. Consequently, one can also perform run-time experiments with the intermediate programs, not just with the leaves. Our problem domain is scientific computing. We use experiments with the intermediate refinements, written in Haskell, to derive cost predictions for the target programs, written in C with MPI. These predictions guide our choices between alternative refinements as we navigate the refinement tree. This non-standard use of a cost model introduces two new requirements:

* The higher up in the refinement tree a program is, the less determined its target implementation is, and therefore the less accurate the cost prediction will be. However, the cost data we obtain must (and can) suffice to make a correct choice between alternative refinements, based on a relative comparison of the respective performance predictions.
* The costing of an intermediate program necessarily happens on an interpreter, not on the installation on which the target program will run, which is not yet fully determined. Still, the cost model must be calibrated with respect to the real installation, even when it is used for intermediate programs, since our choice of refinement depends on the characteristics of the real installation.

We propose a cost model for the stepwise prototyping of high-performance parallel programs which satisfies these requirements.
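To make the idea concrete, the following is a minimal hypothetical sketch of two executable refinements of the same specification (a sum reduction), together with a toy cost model used only for a *relative* comparison between them, as the abstract describes. All function names and cost constants here are illustrative assumptions, not the authors' actual model or notation.

```haskell
-- Refinement A: a sequential fold (one processor).
sumSeq :: [Int] -> Int
sumSeq = foldl (+) 0

-- Refinement B: a balanced divide-and-conquer version, whose two
-- halves could later be refined onto different MPI processes.
sumDC :: [Int] -> Int
sumDC []  = 0
sumDC [x] = x
sumDC xs  = sumDC l + sumDC r
  where (l, r) = splitAt (length xs `div` 2) xs

-- Toy cost predictions in abstract "steps": linear work for the
-- sequential refinement; logarithmic span plus an assumed per-split
-- communication charge for the divide-and-conquer refinement.
costSeq, costDC :: Int -> Double
costSeq n = fromIntegral n
costDC  n = logBase 2 (fromIntegral (max 2 n)) * (1 + commCost)
  where commCost = 4  -- assumed relative cost of one message

-- Choose the refinement predicted to be cheaper on n elements.
-- Only the comparison matters, not the absolute cost values.
chooseRefinement :: Int -> String
chooseRefinement n
  | costDC n < costSeq n = "divide-and-conquer"
  | otherwise            = "sequential"
```

Both refinements are executable and compute the same result, so experiments can be run at either node of the refinement tree; the (assumed) cost functions then steer the choice, e.g. toward the sequential version for small inputs and the divide-and-conquer version for large ones.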