A novel strategy for building interoperable MPI environment in heterogeneous high performance systems

  • Authors:
  • Francisco Isidro Massetto, Liria Matsumoto Sato, Kuan-Ching Li

  • Affiliations:
  • Francisco Isidro Massetto and Liria Matsumoto Sato: Dept. of Computer Engineering and Digital Systems, Polytechnic School, University of Sao Paulo, Sao Paulo, Brazil 05508-900
  • Kuan-Ching Li: Dept. of Computer Science and Information Engineering, Providence University, Taichung, Taiwan 43301

  • Venue:
  • The Journal of Supercomputing
  • Year:
  • 2012

Abstract

Breakthrough advances in microprocessor technology and efficient power management have altered the course of processor development, with multi-core technology emerging to deliver higher levels of processing. Many-core technology has boosted the computing power provided by clusters of workstations or SMPs, delivering large computational capacity at an affordable cost using solely commodity components. Such cluster and multi-cluster computing systems often run different implementations of message-passing libraries and different system software, including operating systems. To guarantee correct execution of a message-passing parallel application in a computing environment other than the one it was originally developed for, a review of the application code is needed. In this paper, a hybrid communication interfacing strategy is proposed to execute a parallel application on a group of computing nodes belonging to different clusters or multi-clusters (whose systems may run different operating systems and MPI implementations), interconnected with public or private IP addresses, and responding interchangeably to user execution requests. Experimental results, obtained by executing benchmark parallel applications, demonstrate the feasibility and effectiveness of the proposed strategy.
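
As a rough illustration of the boundary the abstract describes (this is a sketch of one common pattern, not the authors' implementation), hybrid interfacing strategies of this kind often use native MPI inside each cluster and plain TCP sockets between "gateway" processes at the cluster edges, since sockets remain usable across different MPI implementations and across public/private IP boundaries. In the toy gateway below, written in C with MPI, the port number `BRIDGE_PORT` and the choice of rank 0 as the gateway are illustrative assumptions:

```c
/*
 * Illustrative sketch only: rank 0 acts as this cluster's gateway,
 * accepting one message over TCP from a remote cluster (which may run
 * a different MPI implementation) and relaying it into the local MPI
 * world with a normal MPI_Send. Run with at least 2 ranks.
 */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define BRIDGE_PORT 5000   /* assumed inter-cluster port */
#define MSG_MAX     1024

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Gateway side: listen for one inter-cluster connection. */
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(BRIDGE_PORT);
        bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        listen(srv, 1);

        int conn = accept(srv, NULL, NULL);
        char buf[MSG_MAX] = {0};
        ssize_t n = recv(conn, buf, sizeof(buf) - 1, 0);
        if (n > 0) {
            /* Relay the inter-cluster payload into the local MPI world. */
            MPI_Send(buf, (int)n + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        }
        close(conn);
        close(srv);
    } else if (rank == 1) {
        /* Ordinary local rank: receives the relayed message via MPI. */
        char buf[MSG_MAX];
        MPI_Recv(buf, MSG_MAX, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received relayed message: %s\n", buf);
    }

    MPI_Finalize();
    return 0;
}
```

With this pattern, each cluster builds and runs against its own MPI implementation while the socket link stays implementation-neutral; the paper's strategy targets the same boundary, along with concerns such as launching and coordinating processes across clusters that a sketch this small leaves out.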