Can high performance software DSM systems designed with InfiniBand features benefit from PCI-Express?

  • Authors:
  • R. Noronha; D. K. Panda

  • Affiliations:
  • Dept. of Computer Science & Engineering, Ohio State University, Columbus, OH, USA (both authors)

  • Venue:
  • CCGRID '05: Proceedings of the Fifth IEEE International Symposium on Cluster Computing and the Grid (CCGrid'05), Volume 2
  • Year:
  • 2005

Abstract

The performance of software distributed shared memory (DSM) systems has traditionally lagged behind that of other programming models, primarily because of overhead in the coherency protocol, communication bottlenecks, and slow networks. Software DSMs have benefited from interconnection technologies such as Myrinet, InfiniBand, and Quadrics, which offer low-latency, high-bandwidth communication. Additionally, features of these networks, such as RDMA and atomic operations, have been used to implement portions of software DSM protocols directly, further reducing overhead. Such network-aware protocols depend on the characteristics of these networks, and the performance of the networks in turn depends on the system architecture, especially the I/O bus. PCI-Express, the successor to the PCI-X architecture, offers improved latency and bandwidth characteristics. In this paper, we evaluate the impact of an improved bus technology like PCI-Express on the performance of software DSM protocols that use the network features of InfiniBand. We observe a reduction in application execution time of up to 13% at four nodes when PCI-Express is used instead of PCI-X.
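
For context on the one-sided operations the abstract refers to, the sketch below shows roughly how a DSM layer might post an RDMA write using the InfiniBand verbs API. This is illustrative only and is not taken from the paper: the function name post_rdma_write and its parameters are assumptions, and queue-pair setup, memory registration, and error handling are omitted.

    /* Minimal sketch (not from the paper): posting a one-sided RDMA write
     * with the InfiniBand verbs API. A software DSM protocol can use such
     * an operation to propagate a page update without involving the remote
     * CPU. Connection setup and memory registration are assumed done. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    int post_rdma_write(struct ibv_qp *qp, void *local_buf, uint32_t len,
                        uint32_t lkey, uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge;
        struct ibv_send_wr wr, *bad_wr = NULL;

        memset(&sge, 0, sizeof(sge));
        sge.addr   = (uintptr_t)local_buf;   /* registered local buffer */
        sge.length = len;
        sge.lkey   = lkey;

        memset(&wr, 0, sizeof(wr));
        wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided write */
        wr.send_flags          = IBV_SEND_SIGNALED;  /* request a completion */
        wr.sg_list             = &sge;
        wr.num_sge             = 1;
        wr.wr.rdma.remote_addr = remote_addr;        /* peer's registered region */
        wr.wr.rdma.rkey        = rkey;

        return ibv_post_send(qp, &wr, &bad_wr);      /* 0 on success */
    }

Because the write completes without any action by the remote CPU, its end-to-end cost is dominated by the network and the host's I/O bus, which is why moving from PCI-X to PCI-Express can directly affect DSM protocol performance.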