This paper presents the results of five experiments run on the Chime Parallel Processing System. Chime is an implementation of the parallel part of the CC++ programming language on a network of computers. It offers ease of programming, shared memory, fault tolerance, load balancing, and the ability to nest parallel computations, with performance comparable to most parallel processing environments. The experiments comprise a performance experiment (measuring Chime's overhead), a load-balancing experiment (showing even distribution of work between slow and fast machines), a fault-tolerance experiment (showing the effects of multiple machine failures), a recursion experiment (showing how programs can use nesting and recursion), and a fine-grain experiment (showing the viability of executions with fine-grain computations).