Understanding Checkpointing Overheads on Massive-Scale Systems: Analysis of the IBM Blue Gene/P System

  • Authors:
  • Rinku Gupta, Harish Naik, Pete Beckman

  • Affiliations:
  • Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, USA (all authors)

  • Venue:
  • International Journal of High Performance Computing Applications
  • Year:
  • 2011

Abstract

Providing fault tolerance in high-end petascale systems, consisting of millions of hardware components and complex software stacks, is becoming an increasingly challenging task. Checkpointing continues to be the most prevalent technique for providing fault tolerance in such high-end systems. Considerable research has focused on optimizing checkpointing; however, in practice, checkpointing still imposes a high overhead on users. In this paper, we study the checkpointing overhead seen by various applications running on leadership-class machines like the IBM Blue Gene/P at Argonne National Laboratory. In addition to studying popular applications, we design a methodology to help users understand and intelligently choose an optimal checkpointing frequency to reduce the overall checkpointing overhead incurred. In particular, we study the Grid-Based Projector-Augmented Wave application, the Carr-Parrinello Molecular Dynamics application, the Nek5000 computational fluid dynamics application, and the Parallel Ocean Program application, and analyze their memory usage and possible checkpointing trends on 65,536 processors of the Blue Gene/P system.
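The abstract's notion of choosing an optimal checkpointing frequency is commonly illustrated by Young's first-order approximation, which balances checkpoint cost against expected rework after a failure. The sketch below is not taken from this paper; it is a minimal illustration of that standard result, assuming a checkpoint cost `checkpoint_cost_s` and a system mean time between failures `mtbf_s` (both hypothetical inputs the user would measure on their own system):

```python
import math

def optimal_checkpoint_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
    """Young's approximation: interval between checkpoints (seconds)
    that minimizes total overhead, valid when the checkpoint cost is
    small relative to the mean time between failures (MTBF)."""
    if checkpoint_cost_s <= 0 or mtbf_s <= 0:
        raise ValueError("checkpoint cost and MTBF must be positive")
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# Hypothetical example: a 10-minute checkpoint on a machine with a
# one-day MTBF suggests checkpointing roughly every 2.8 hours.
interval = optimal_checkpoint_interval(600.0, 86400.0)
print(f"optimal interval: {interval:.0f} s (~{interval / 3600:.1f} h)")
```

At massive scale the aggregate MTBF shrinks with component count, so the optimal interval shortens accordingly, which is one reason the paper argues that checkpointing frequency deserves per-application analysis.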