Performance optimization on a supercomputer with cTuning and the PGI compiler
Proceedings of the 2nd International Workshop on Adaptive Self-Tuning Computing Systems for the Exaflop Era
NCAR's Bluefire supercomputer is instrumented with a set of low-overhead processes that continually monitor the floating-point counters of its 3,840 batch-compute cores. We extract performance numbers for each batch job by correlating the data from the nodes on which it ran. Guided by experience and heuristics for good performance, we use these data, in part, to identify poorly performing jobs and then work with the users to improve their jobs' efficiency. Often, the solution involves simple steps such as spawning an adequate number of processes or threads, binding those processes or threads to cores, using large memory pages, or enabling adequate compiler optimization. These efforts typically yield a wall-clock runtime reduction of 10% to 20%. With more involved changes to codes and scripts, some users have obtained performance improvements of 40% to 90%. We discuss our instrumentation, several successful cases, and the approach's general applicability to other systems.
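The workflow described above, aggregating per-node floating-point counter samples into a job-wide rate and flagging jobs that fall below an efficiency heuristic, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sample format, the peak rate, the efficiency threshold, and the function names are all assumptions introduced here.

```python
# Hypothetical sketch of counter-based job screening (not the paper's code).
# Each node's data is assumed to be a (flop_count, elapsed_seconds) pair
# accumulated by the monitoring processes over the job's lifetime.

PEAK_GFLOPS_PER_CORE = 4.0   # assumed per-core peak; the real machine's differs
EFFICIENCY_THRESHOLD = 0.05  # illustrative cutoff: flag jobs below 5% of peak

def job_gflops(node_samples):
    """Correlate per-node (flops, seconds) samples into one job-wide rate."""
    total_flops = sum(flops for flops, _ in node_samples)
    elapsed = max(seconds for _, seconds in node_samples)
    return total_flops / elapsed / 1e9

def flag_poor_jobs(jobs, cores_per_node=32):
    """Return the ids of jobs whose floating-point efficiency is below threshold."""
    flagged = []
    for job_id, node_samples in jobs.items():
        ncores = len(node_samples) * cores_per_node
        efficiency = job_gflops(node_samples) / (ncores * PEAK_GFLOPS_PER_CORE)
        if efficiency < EFFICIENCY_THRESHOLD:
            flagged.append(job_id)
    return flagged
```

A flagged job would then be followed up by hand, checking process counts, core binding, page size, and compiler flags, as the abstract describes.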