Guided self-scheduling: A practical scheduling scheme for parallel supercomputers
IEEE Transactions on Computers
Introduction to parallel algorithms and architectures: array, trees, hypercubes
Synchronization and communication in the T3E multiprocessor
Proceedings of the seventh international conference on Architectural support for programming languages and operating systems
NAS Experiences of Porting CM Fortran Codes to HPF on IBM SP2 and SGI Power Challenge
IPPS '96 Proceedings of the 10th International Parallel Processing Symposium
Many programming models for massively parallel machines exist, and each has its advantages and disadvantages. In this article we present a programming model that combines features from other programming models that (1) can be efficiently implemented on present and future Cray Research massively parallel processor (MPP) systems and (2) are useful in constructing highly parallel programs. The model supports several styles of programming: message-passing, data parallel, global address (shared data), and work-sharing. These styles may be combined within the same program. The model includes features that allow a user to define a program in terms of the behavior of the system as a whole, where the behavior of individual tasks is implicit from this systemic definition. (In general, features marked as shared are designed to support this perspective.) It also supports an opposite perspective, where a program may be defined in terms of the behaviors of individual tasks, and a program is implicitly the sum of the behaviors of all tasks. (Features marked as private are designed to support this perspective.) Users can exploit any combination of either set of features without ambiguity and thus are free to define a program from whatever perspective is most appropriate to the problem at hand.
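The shared versus private distinction can be illustrated with a small analogy. The sketch below is not the Cray MPP model's actual syntax; it uses Python threads (all names and the interleaved work-sharing scheme are illustrative assumptions) to write the same reduction from the two perspectives the abstract describes: one shared result updated by the system as a whole, versus per-task private results that are explicitly summed.

```python
import threading

# Illustrative analogy only, NOT Cray MPP syntax: one reduction written
# from the "shared" (systemic) and "private" (per-task) perspectives.

N_TASKS = 4
DATA = list(range(100))

# Shared perspective: a single result object the whole system updates;
# each task's contribution is implicit in the systemic definition.
shared_total = 0
lock = threading.Lock()

def shared_worker(task_id):
    global shared_total
    chunk = DATA[task_id::N_TASKS]   # work-sharing: interleaved slices of DATA
    partial = sum(chunk)
    with lock:                       # synchronize updates to the shared datum
        shared_total += partial

# Private perspective: each task owns its own result; the program's answer
# is explicitly the sum of the behaviors of all tasks.
private_totals = [0] * N_TASKS

def private_worker(task_id):
    private_totals[task_id] = sum(DATA[task_id::N_TASKS])

def run(worker):
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_TASKS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

run(shared_worker)
run(private_worker)

# Both perspectives define the same program and agree on the answer.
assert shared_total == sum(private_totals) == sum(DATA)
```

Either view can be adopted without ambiguity; the mixed case (some data shared, some private) corresponds to combining the two feature sets within one program, as the model permits.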