Performance and scalability of MPI on PC clusters

  • Authors:
  • Glenn R. Luecke;Marina Kraeva;Jing Yuan;Silvia Spanoyannis

  • Affiliations:
  • 291 Durham Center, Iowa State University, Ames, IA 50011, U.S.A. (all authors)

  • Venue:
  • Concurrency and Computation: Practice & Experience
  • Year:
  • 2003


Abstract

The purpose of this paper is to compare the communication performance and scalability of MPI communication routines on a Windows Cluster, a Linux Cluster, a Cray T3E-600, and an SGI Origin 2000. All tests in this paper were run using various numbers of processors and two message sizes. Although the Cray T3E-600 is about seven years old, it performed best of all machines for most of the tests. The Linux Cluster with the Myrinet interconnect and Myricom's MPI performed and scaled quite well, in most cases outperforming the Origin 2000 and in some cases the T3E. The Windows Cluster using the Giganet Full Interconnect and MPI/Pro's MPI performed and scaled poorly for small messages compared with all of the other machines. Copyright © 2004 John Wiley & Sons, Ltd.
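
The abstract does not reproduce the benchmark code, but a minimal sketch of the kind of measurement it describes, timing an MPI communication routine across all processes for a small and a large message, might look as follows. The choice of MPI_Bcast as the routine, the two message sizes, and the repetition count are illustrative assumptions, not the authors' actual test parameters.

/*
 * Minimal sketch of the style of test described in the abstract:
 * time an MPI collective (here MPI_Bcast) for two message sizes.
 * The sizes and repetition count below are assumptions, not the
 * paper's actual parameters. Run with varying process counts, e.g.:
 *   mpicc bench.c -o bench && mpirun -np 16 ./bench
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int sizes[2] = {8, 10000};  /* assumed "small" and "large" sizes in bytes */
    const int reps = 100;             /* assumed repetition count */

    for (int s = 0; s < 2; s++) {
        char *buf = malloc(sizes[s]);
        MPI_Barrier(MPI_COMM_WORLD);  /* synchronize before timing */

        double t0 = MPI_Wtime();
        for (int r = 0; r < reps; r++)
            MPI_Bcast(buf, sizes[s], MPI_CHAR, 0, MPI_COMM_WORLD);
        double local = (MPI_Wtime() - t0) / reps;

        /* Report the slowest process's average time, since scalability
           is governed by the last process to complete the operation. */
        double worst;
        MPI_Reduce(&local, &worst, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("%d procs, %d bytes: %.3f us per bcast\n",
                   nprocs, sizes[s], worst * 1e6);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}

Repeating such a run over a range of process counts on each machine yields the per-routine scaling curves that comparisons like the one in this paper are built from.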