In clusters of heterogeneous systems, message-passing libraries (distributed computing tools) are employed to harness computing and other resources. A task is submitted to a tool, and its actual execution is carried out on the aggregated network resources. The tool takes care of scheduling, distributing subtasks, and gathering results, along with synchronization and message-exchange requirements, and it needs initialization and synchronization routines for the submitted task. These tools also provide many other features, such as transparency, fault tolerance, and load balancing. Sometimes these features, or even the initialization, may not be required. The aim of tool designers should therefore be to provide quality performance with on-request initialization and feature provision, since initialization routines and special features take their own time on top of the core distributed computation, increasing the overall computational cost. In this paper a dual-purpose tool, the Distributed Task Measure (DTM), is implemented. DTM is primarily used to place other distributed computing tools on a performance index, judging their startup and performance. DTM may also serve to achieve macro-level parallelization where requirements call for it.
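The startup-versus-computation split that DTM judges can be sketched as follows. This is an illustrative Python analogue, not the DTM implementation: the thread pool stands in for a distributed tool's worker initialization, and `subtask` and `timed_run` are hypothetical names introduced here.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def subtask(chunk):
    # Stand-in for one distributed subtask: sum a slice of integers.
    return sum(chunk)

def timed_run(n_workers, chunks):
    """Illustrative sketch: separate a tool's startup/initialization
    cost from the core computation, as a DTM-style performance index
    would require. Not the actual DTM code."""
    t0 = time.perf_counter()
    pool = ThreadPoolExecutor(max_workers=n_workers)  # startup phase
    t_startup = time.perf_counter() - t0

    t1 = time.perf_counter()
    partials = list(pool.map(subtask, chunks))  # distribute and gather
    result = sum(partials)
    t_compute = time.perf_counter() - t1

    pool.shutdown()
    return result, t_startup, t_compute

if __name__ == "__main__":
    chunks = [range(i * 1000, (i + 1) * 1000) for i in range(8)]
    result, t_start, t_comp = timed_run(4, chunks)
    print(result, t_start, t_comp)
```

Reporting `t_startup` and `t_compute` separately is the point: a tool whose initialization dominates short tasks would rank poorly on a startup index even if its core throughput is good.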