Architectural requirements and scalability of the NAS parallel benchmarks
SC '99 Proceedings of the 1999 ACM/IEEE conference on Supercomputing
Performance Evaluation of Fast Ethernet, Giganet, and Myrinet on a Cluster
ICCS '02 Proceedings of the International Conference on Computational Science-Part I
IPDPS '02 Proceedings of the 16th International Parallel and Distributed Processing Symposium
Protocols and Strategies for Optimizing Performance of Remote Memory Operations on Clusters
IPDPS '02 Proceedings of the 16th International Parallel and Distributed Processing Symposium
One-sided Communication on the Myrinet-based SMP Clusters using the GM Message-Passing Library
IPDPS '01 Proceedings of the 15th International Parallel & Distributed Processing Symposium
Parallel Unstructured AMR and Gigabit Networking for Beowulf-Class Clusters
PPAM '01 Proceedings of the 4th International Conference on Parallel Processing and Applied Mathematics - Revised Papers
Advanced environments for parallel and distributed applications: a view of current status
Parallel Computing - Special issue: Advanced environments for parallel and distributed computing
Performance Comparison of MPI Implementations over InfiniBand, Myrinet and Quadrics
Proceedings of the 2003 ACM/IEEE conference on Supercomputing
International Journal of High Performance Computing and Networking
Performance analysis of interconnection networks for multi-cluster systems
ICCS'05 Proceedings of the 5th international conference on Computational Science - Volume Part III
GigaNet and Myrinet are two of the leading interconnects for clusters of commodity computer systems. Both provide memory-protected user-level network interface access and deliver low-latency, high-bandwidth communication to applications. GigaNet is a connection-oriented interconnect based on a hardware implementation of the Virtual Interface (VI) Architecture and Asynchronous Transfer Mode (ATM) technologies. Myrinet is a connectionless interconnect that leverages packet-switching technology from experimental Massively Parallel Processor (MPP) networks. This paper investigates their architectural differences and evaluates their performance on two commodity clusters built from two generations of Symmetric Multiprocessor (SMP) servers. The performance measurements reported here suggest that the implementation of the Message Passing Interface (MPI) significantly affects cluster performance. Although MPICH-GM over Myrinet demonstrates lower latency for small messages, its polling-driven implementation often leads to tight synchronization between communicating processes and higher CPU overhead.
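The trade-off the abstract points to, that a polling-driven MPI progress engine gains latency at the cost of CPU cycles, can be illustrated with a minimal sketch. The following is not code from the paper or from MPICH-GM; it is a hypothetical single-machine analogy in which one thread "delivers" a message and a receiver waits for it either by busy-polling a flag (as a polling-driven engine does) or by blocking until notified (as an interrupt-driven engine does):

```python
# Illustrative sketch only: contrasts busy-polling completion detection
# (as in a polling-driven MPI implementation such as MPICH-GM) with a
# blocking, interrupt-style wait. All names and timings are hypothetical.
import threading

def wait_for_message(mode, delay=0.05):
    """Wait for a simulated message arrival; return loop iterations spent."""
    arrived = threading.Event()
    # Simulated sender: marks the message as arrived after `delay` seconds.
    t = threading.Timer(delay, arrived.set)
    t.start()
    spins = 0
    if mode == "polling":
        # Busy-wait: the CPU keeps re-checking the flag. Completion is
        # noticed almost immediately, but every iteration burns CPU cycles
        # that could otherwise overlap with computation.
        while not arrived.is_set():
            spins += 1
    else:
        # Blocking wait: the thread sleeps until notified. The CPU is free
        # for other work, at the cost of wakeup latency.
        arrived.wait()
    t.join()
    return spins

poll_spins = wait_for_message("polling")
block_spins = wait_for_message("blocking")
```

Here `poll_spins` ends up large (each iteration is wasted CPU work) while `block_spins` stays zero, mirroring the abstract's observation that polling yields tight synchronization between communicating processes and higher CPU overhead.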