Characterization and modeling of PIDX parallel I/O for performance optimization

  • Authors:
  • Sidharth Kumar; Avishek Saha; Venkatram Vishwanath; Philip Carns; John A. Schmidt; Giorgio Scorzelli; Hemanth Kolla; Ray Grout; Robert Latham; Robert Ross; Michael E. Papka; Jacqueline Chen; Valerio Pascucci

  • Affiliations:
  • University of Utah, Salt Lake City, UT; University of Utah, Salt Lake City, UT; Argonne National Laboratory, Argonne, IL; Argonne National Laboratory, Argonne, IL; University of Utah, Salt Lake City, UT; University of Utah, Salt Lake City, UT; Sandia National Laboratories, Livermore, CA; National Renewable Energy Laboratory, Golden, CO; Argonne National Laboratory, Argonne, IL; Argonne National Laboratory, Argonne, IL; Argonne National Laboratory, Argonne, IL and Northern Illinois University, DeKalb, IL; Sandia National Laboratories, Livermore, CA; University of Utah, Salt Lake City, UT and Pacific Northwest National Laboratory, Richland, WA

  • Venue:
  • SC '13: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis
  • Year:
  • 2013

Abstract

Parallel I/O library performance can vary greatly in response to user-tunable parameter values such as aggregator count, file count, and aggregation strategy. Unfortunately, manual selection of these values is time consuming and dependent on characteristics of the target machine, the underlying file system, and the dataset itself. Some characteristics, such as the amount of memory per core, can also impose hard constraints on the range of viable parameter values. In this work we address these problems by using machine learning techniques to model the performance of the PIDX parallel I/O library and select appropriate tunable parameter values. We characterize both the network and I/O phases of PIDX on a Cray XE6 as well as an IBM Blue Gene/P system. We use the results of this study to develop a machine learning model for parameter space exploration and performance prediction.
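The abstract describes fitting a model to measured I/O performance as a function of tunable parameters (aggregator count, file count, aggregation strategy) and then selecting values subject to hard constraints such as per-core memory. The sketch below is an illustration of that general idea only, not the paper's actual model or data: it fits a generic regression (a random forest, an assumed choice) to hypothetical PIDX-style measurements and ranks candidate configurations under a made-up memory constraint. All feature names, values, and constants are placeholders.

```python
from itertools import product

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical past measurements: each row is
# (aggregator_count, file_count, aggregation_strategy_id);
# y holds the observed write bandwidth in MiB/s. All values are made up.
X_train = np.array([
    [32,   1, 0],
    [64,   2, 0],
    [128,  4, 1],
    [256,  8, 1],
    [512, 16, 1],
])
y_train = np.array([410.0, 760.0, 1250.0, 1180.0, 950.0])

# Generic regression model standing in for the paper's learned performance model.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Candidate configurations, pruned by a hard constraint analogous to the
# per-core memory limit mentioned in the abstract (constants are hypothetical).
MEM_PER_AGGREGATOR_MIB = 64
MEM_BUDGET_MIB = 16384
candidates = np.array([
    [a, f, s]
    for a, f, s in product([32, 64, 128, 256, 512], [1, 2, 4, 8, 16], [0, 1])
    if a * MEM_PER_AGGREGATOR_MIB <= MEM_BUDGET_MIB
])

# Rank the remaining configurations by predicted bandwidth and pick the best.
predicted = model.predict(candidates)
best = candidates[int(np.argmax(predicted))]
print("predicted-best (aggregators, files, strategy):", best)
```

In practice such a model would be trained on benchmark runs from the target machine and file system, since the abstract notes that good parameter values depend on the machine, the file system, and the dataset.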