Most supercomputers today are parallel computers. This paper introduces an approach for efficiently mapping parallel applications onto parallel MIMD machine architectures and demonstrates its applicability to uniform-memory-access multiprocessors. The paper shows that an intermediate layer of abstraction between the application level and the parallel-architecture level is conducive not only to better software productivity but also to performance-efficient programs. The intermediate layer consists of a set of commonly used parallel programming paradigms, called implementation machines. Each implementation machine (IM) is given both a mathematical representation and a pragmatic representation. The user maps the application onto one or more IMs, and the system implements the IMs efficiently on the underlying parallel machine.

To illustrate the power of this concept, two different ways of parallelizing a molecular dynamics algorithm (master-slave and pipeline) on a shared-memory multiprocessor are presented, and mathematical models and pragmatic representations are developed for both of these implementation machines. To demonstrate the utility of the IM approach to parallel programming, students in a graduate course on parallel processing were asked to write parallel programs using it. The results of this experiment indicate that the IM approach indeed leads to performance-efficient parallel programs.
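As a rough illustration of the master-slave paradigm mentioned above, the following sketch shows a master distributing independent work items to a pool of slave workers over a shared queue. This is a hypothetical minimal example in Python, not the paper's actual IM runtime or its molecular dynamics code; the function names (`run_master_slave`, `work_fn`) are illustrative only.

```python
# Minimal master-slave sketch (hypothetical; not the paper's IM implementation).
# The master enqueues independent work items; slave threads repeatedly pull
# items from a shared queue, apply the work function, and collect results.
import threading
import queue

def run_master_slave(items, work_fn, num_slaves=4):
    tasks = queue.Queue()
    results = []
    lock = threading.Lock()

    def slave():
        while True:
            item = tasks.get()
            if item is None:            # sentinel: no more work, terminate
                return
            out = work_fn(item)
            with lock:                  # serialize access to shared results
                results.append(out)

    workers = [threading.Thread(target=slave) for _ in range(num_slaves)]
    for w in workers:
        w.start()
    for item in items:                  # master distributes the work
        tasks.put(item)
    for _ in workers:                   # one termination sentinel per slave
        tasks.put(None)
    for w in workers:
        w.join()
    return results

# Usage: square a range of numbers in parallel (order of completion varies).
print(sorted(run_master_slave(range(5), lambda x: x * x)))  # [0, 1, 4, 9, 16]
```

A pipeline IM would instead chain stages so that each worker applies one phase of the computation and passes intermediate results downstream; the queue-based structure is similar, with one queue between each pair of adjacent stages.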