A review of algebraic multigrid
Journal of Computational and Applied Mathematics - Special issue on numerical analysis 2000 Vol. VII: partial differential equations
Distributed Computing in a Heterogeneous Computing Environment
Proceedings of the 5th European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface
A Distributed Computing Center Software for the Efficient Use of Parallel Computer Systems
HPCN Europe 1994 Proceedings of the International Conference and Exhibition on High-Performance Computing and Networking Volume II: Networking and Tools
MPICH/MADIII: a Cluster of Clusters Enabled MPI Implementation
CCGRID '03 Proceedings of the 3rd International Symposium on Cluster Computing and the Grid
A Message Passing Interface Library for Inhomogeneous Coupled Clusters
IPDPS '03 Proceedings of the 17th International Symposium on Parallel and Distributed Processing
MPICH-G2: a Grid-enabled implementation of the Message Passing Interface
Journal of Parallel and Distributed Computing - Special issue on computational grids
A distributed bio-inspired method for multisite grid mapping
Applied Computational Intelligence and Soft Computing - Special issue on theory and applications of evolutionary computation
The new multidevice architecture of MetaMPICH in the context of other approaches to grid-enabled MPI
EuroPVM/MPI'06 Proceedings of the 13th European PVM/MPI Users' Group Conference on Recent Advances in Parallel Virtual Machine and Message Passing Interface
Large MPI applications whose resource demands exceed the capacity of the local site's cluster could instead be distributed across a number of clusters in a Grid to satisfy the demand. However, several drawbacks limit the applicability of this approach: communication paths between compute nodes of different clusters usually provide lower bandwidth and higher latency than the cluster-internal ones; MPI libraries use dedicated I/O nodes for inter-cluster communication, which become a bottleneck; and tools for co-ordinating the availability of the different clusters across administrative domains are missing. To make the Grid approach efficient, several prerequisites must be in place: an MPI implementation providing high-performance communication mechanisms across cluster borders, a network connection with high bandwidth and low latency dedicated to the application, compute nodes made available to the application exclusively, and finally a Grid middleware gluing everything together. In this paper we present work recently completed in the VIOLA project: MetaMPICH, user-controlled QoS of clusters and the interconnecting network, a MetaScheduling Service, and the UNICORE integration.