One of the attractive features of Grid computing is that resources in geographically distant places can be mobilized to meet computational needs as they arise. A particularly challenging issue is executing a single application across multiple machines separated by large distances. While certain classes of applications, such as pipeline-style or master-slave-style applications, may run well in Grid computing environments with little or no modification, tightly coupled applications require significant work to achieve good performance. In this paper, we demonstrate that message-driven objects, implemented in the Charm++ and Adaptive MPI systems, can mask the effects of latency in Grid computing environments without requiring modification of the application software. We examine a simple five-point stencil decomposition application as well as a more complex molecular dynamics application, running in an environment in which arbitrary artificial latencies can be induced between pairs of nodes. The performance of the applications under artificial latencies is compared with their performance across TeraGrid nodes located at the National Center for Supercomputing Applications and Argonne National Laboratory.
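To make the latency-hiding idea concrete, the following is a minimal sketch (not code from the paper) of the overlap a message-driven runtime automates for a five-point stencil: post the ghost-cell exchange early, compute the interior rows, which need no remote data, while the possibly wide-area messages are in flight, and update the boundary rows only after the ghost data arrives. It is written against the plain MPI nonblocking API; the 1-D row-block decomposition, grid sizes, and iteration count are illustrative assumptions.

    /* Illustrative sketch (not from the paper): hiding wide-area latency
     * in a five-point stencil by overlapping the ghost-row exchange with
     * interior computation. 1-D row-block decomposition; sizes and names
     * are hypothetical. Build with an MPI C compiler, e.g. mpicc. */
    #include <mpi.h>
    #include <string.h>

    #define NX 512   /* local interior rows; arrays carry 2 ghost rows */
    #define NY 512   /* columns */

    static double u[NX + 2][NY], unew[NX + 2][NY];  /* zero-initialized */

    static void update_row(int i) {
        for (int j = 1; j < NY - 1; j++)
            unew[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] +
                                 u[i][j - 1] + u[i][j + 1]);
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        int up   = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
        int down = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

        for (int step = 0; step < 100; step++) {
            MPI_Request reqs[4];
            /* Post the ghost-row exchange first ... */
            MPI_Irecv(u[0],      NY, MPI_DOUBLE, up,   0, MPI_COMM_WORLD, &reqs[0]);
            MPI_Irecv(u[NX + 1], NY, MPI_DOUBLE, down, 1, MPI_COMM_WORLD, &reqs[1]);
            MPI_Isend(u[1],      NY, MPI_DOUBLE, up,   1, MPI_COMM_WORLD, &reqs[2]);
            MPI_Isend(u[NX],     NY, MPI_DOUBLE, down, 0, MPI_COMM_WORLD, &reqs[3]);
            /* ... then compute the interior rows, which need no remote
             * data, while the messages are in flight. */
            for (int i = 2; i <= NX - 1; i++)
                update_row(i);
            /* Only the two boundary rows must wait for ghost data. */
            MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
            update_row(1);
            update_row(NX);
            memcpy(u, unew, sizeof(u));
        }
        MPI_Finalize();
        return 0;
    }

Under Charm++ or Adaptive MPI, a comparable overlap is obtained without hand-restructuring the loop: the domain is over-decomposed into many message-driven objects per processor, and the runtime scheduler executes whichever object's messages have arrived, which is the latency-masking effect whose impact the paper quantifies.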