Mapping communication layouts to network hardware characteristics on massive-scale Blue Gene systems

  • Authors:
  • Pavan Balaji, Rinku Gupta, Abhinav Vishnu, Pete Beckman

  • Affiliations:
  • Pavan Balaji, Rinku Gupta: Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, USA 60439
  • Abhinav Vishnu: High Performance Computing Group, Pacific Northwest National Laboratory, Richland, USA 99352
  • Pete Beckman: Argonne Leadership Computing Facility, Argonne National Laboratory, Argonne, USA 60439

  • Venue:
  • Computer Science - Research and Development
  • Year:
  • 2011

Abstract

For parallel applications running on high-end computing systems, which processes of an application get launched on which processing cores is typically determined at application launch time, without any information about the application's characteristics. As high-end computing systems continue to grow in scale, however, this approach is becoming increasingly infeasible for achieving the best performance. For example, on systems such as IBM Blue Gene and Cray XT that rely on flat 3D torus networks, process communication often involves network sharing, even for highly scalable applications. This causes the overall application performance to depend heavily on how processes are mapped onto the network. In this paper, we first analyze the impact of different process mappings on application performance on a massive Blue Gene/P system. We then match this analysis against application communication patterns, which we allow applications to describe before they are launched. The underlying process management system can use this combined information, in conjunction with the hardware characteristics of the system, to determine the best mapping for the application. Our experiments study the performance of different communication patterns, including 2D and 3D nearest-neighbor communication and structured Cartesian grid communication. Our studies, which scale up to 131,072 cores of the largest BG/P system in the United States (using 80% of the total system size), demonstrate that different process mappings can show significant differences in overall performance, especially at scale. For example, we show that this difference can be as much as 30% for P3DFFT and up to twofold for HALO. Through our proposed model, however, such differences in performance can be avoided so that the best possible performance is always achieved.
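As a point of reference for how an application can expose a nearest-neighbor communication pattern to the runtime, the sketch below uses standard MPI Cartesian topology calls with reordering enabled. This is only an analogous, standard-MPI mechanism assumed for illustration; it is not the launch-time pattern-description interface proposed in the paper.

```c
/* Minimal sketch (assumption: standard MPI-2 Cartesian topology calls as an
 * analogue to the paper's pattern description): a 2D nearest-neighbor layout
 * is declared to the MPI library, which may reorder ranks to better match
 * the physical network, e.g., the BG/P torus. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int nprocs, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Let MPI factor the process count into a 2D grid. */
    int dims[2] = {0, 0};
    MPI_Dims_create(nprocs, 2, dims);

    /* Periodic in both dimensions; reorder = 1 permits the implementation
     * to remap ranks onto the underlying network topology. */
    int periods[2] = {1, 1};
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);

    /* Nearest neighbors along each grid dimension. */
    int left, right, down, up;
    MPI_Cart_shift(cart, 0, 1, &left, &right);
    MPI_Cart_shift(cart, 1, 1, &down, &up);

    /* A HALO-style exchange would post sends/receives to these four
     * neighbors; here we only report them. */
    printf("rank %d: left=%d right=%d down=%d up=%d\n",
           rank, left, right, down, up);

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}
```

Whether such a reordering hint actually improves placement depends on the MPI implementation and the machine's mapping support; the paper's approach instead supplies the pattern to the process manager before launch, so the mapping can be chosen with full knowledge of the hardware.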