To use networks of workstations for parallel processing, several schemes have been devised that allow processes on different, possibly heterogeneous, platforms to communicate with one another. MPI is one such scheme, providing message passing across different architectures. The MPI specification, however, makes no provision for migrating a process between machines. This paper describes the work required to modify an MPI implementation to support task migration. It also describes "Hector", our heterogeneous computing task allocator, which migrates tasks automatically to improve the overall performance of a parallel program.