Dyn-MPI: Supporting MPI on medium-scale, non-dedicated clusters

  • Authors and affiliations:
  • D. Brent Weatherly (Department of Computer Science, The University of Georgia, Athens, GA 30602-7404, USA)
  • David K. Lowenthal (Department of Computer Science, The University of Georgia, Athens, GA 30602-7404, USA)
  • Mario Nakazawa (Department of Mathematics and Computer Science, Berea College, Berea, Kentucky 40404, USA)
  • Franklin Lowenthal (Department of Computer and Information Science, California State University, Hayward, Hayward, CA 94542, USA)

  • Venue:
  • Journal of Parallel and Distributed Computing

  • Year:
  • 2006


Abstract

Distributing data is a fundamental problem in implementing efficient distributed-memory parallel programs. The problem becomes more difficult in environments where the participating nodes are not dedicated to a parallel application. We are investigating the data distribution problem in non-dedicated environments in the context of explicit message-passing programs. To address this problem, we have designed and implemented an extension to MPI called dynamic MPI (Dyn-MPI). The key component of Dyn-MPI is its run-time system, which efficiently and automatically redistributes data on the fly when there are changes in the application or in the underlying environment. Dyn-MPI supports efficient memory allocation, precise measurement of system load and computation time, and node removal. Performance results show that programs that use Dyn-MPI execute efficiently in non-dedicated environments, achieving up to a nearly threefold improvement over programs that do not redistribute data and a 25% improvement over standard adaptive load-balancing techniques.