Implementing MPI on Windows: Comparison with Common Approaches on Unix

  • Authors:
  • Jayesh Krishna; Pavan Balaji; Ewing Lusk; Rajeev Thakur; Fabian Tillier

  • Affiliations:
  • Argonne National Laboratory, Argonne, IL (Jayesh Krishna, Pavan Balaji, Ewing Lusk, Rajeev Thakur); Microsoft Corporation, Redmond, WA (Fabian Tillier)

  • Venue:
  • EuroMPI'10: Proceedings of the 17th European MPI Users' Group Meeting on Recent Advances in the Message Passing Interface
  • Year:
  • 2010

Abstract

Commercial HPC applications are often run on clusters that use the Microsoft Windows operating system and need an MPI implementation that runs efficiently in the Windows environment. The MPI developer community, however, is more familiar with the issues involved in implementing MPI in a Unix environment. In this paper, we discuss some of the differences in implementing MPI on Windows and Unix, particularly with respect to issues such as asynchronous progress, process management, shared-memory access, and threads. We describe how we implement MPICH2 on Windows and exploit these Windows-specific features while still keeping large parts of the code in common with the Unix version. We also present results comparing the performance of MPICH2 on Unix and Windows on the same hardware. For zero-byte MPI messages, we measured excellent shared-memory latencies of 240 and 275 nanoseconds on Unix and Windows, respectively.
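
The zero-byte latency figures quoted above are the kind of number obtained from a shared-memory ping-pong test between two ranks on the same node. The C sketch below is only an illustration of how such a measurement is commonly done with MPI_Send/MPI_Recv and MPI_Wtime; it is not the authors' benchmark code, and the iteration counts are arbitrary assumptions.

```c
/* Minimal zero-byte ping-pong latency sketch (run with 2 ranks on one node).
 * Illustrative only; not the benchmark used in the paper. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, i;
    const int warmup = 1000, iters = 100000;  /* arbitrary counts */
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Warm up plus timed loop: rank 0 sends, rank 1 echoes back. */
    for (i = -warmup; i < iters; i++) {
        if (i == 0) {
            MPI_Barrier(MPI_COMM_WORLD);
            t0 = MPI_Wtime();          /* start timing after warmup */
        }
        if (rank == 0) {
            MPI_Send(NULL, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(NULL, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(NULL, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(NULL, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        /* One-way latency = half the average round-trip time. */
        printf("zero-byte latency: %.1f ns\n",
               (t1 - t0) / (2.0 * iters) * 1e9);
    }

    MPI_Finalize();
    return 0;
}
```

Because both ranks reside on the same node, the transfer goes through the implementation's shared-memory path, which is what makes sub-microsecond latencies of the order reported in the abstract achievable.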