A Task Migration Implementation of the Message-Passing Interface
HPDC '96 Proceedings of the 5th IEEE International Symposium on High Performance Distributed Computing
Many institutions already have networks of workstations that could be harnessed as a powerful parallel processing resource. A new automatic task allocation system, "Hector", has been built on top of MPI, an environment for parallel programming in the message-passing paradigm with bindings for C and Fortran. Hector supports dynamic task migration and automatic run-time performance optimization, and it runs unmodified MPI programs on existing networks of workstations. Hector thus lets institutions harness their existing computational resources quickly and transparently.