HeteroMPI: Towards a message-passing library for heterogeneous networks of computers

  • Authors:
  • Alexey Lastovetsky; Ravi Reddy

  • Affiliations:
  • Department of Computer Science, University College Dublin, Belfield, Dublin 4, Ireland (both authors)

  • Venue:
  • Journal of Parallel and Distributed Computing
  • Year:
  • 2006

Abstract

The paper presents Heterogeneous MPI (HeteroMPI), an extension of MPI for programming high-performance computations on heterogeneous networks of computers. It allows the application programmer to describe the performance model of the implemented algorithm in a generic form. This model captures all the main features of the underlying parallel algorithm that affect its execution performance: the total number of parallel processes, the total volume of computation to be performed by each process, the total volume of data to be transferred between each pair of processes, and how exactly the processes interact during the execution of the algorithm. Given a description of the performance model, HeteroMPI tries to create a group of processes that executes the algorithm faster than any other group of processes. The principal extensions to MPI are presented. We demonstrate the features of the library through experiments with parallel simulation of the interaction of electric and magnetic fields and with parallel matrix multiplication.