Toward Exascale Resilience

  • Authors:
  • Franck Cappello; Al Geist; Bill Gropp; Laxmikant Kale; Bill Kramer; Marc Snir

  • Affiliations:
  • INRIA, Laboratoire de Recherche en Informatique (LRI), France; Oak Ridge National Laboratory, TN, USA; Department of Computer Science, University of Illinois at Urbana-Champaign, USA; Department of Computer Science, University of Illinois at Urbana-Champaign, USA; NERSC, Lawrence Berkeley National Laboratory, CA, USA; Department of Computer Science, University of Illinois at Urbana-Champaign, USA

  • Venue:
  • International Journal of High Performance Computing Applications

  • Year:
  • 2009

Abstract

Over the past few years, resilience has become a major issue for high-performance computing (HPC) systems, particularly in view of large petascale systems and future exascale systems. These systems will typically gather from half a million to several million central processing unit (CPU) cores running up to a billion threads. From current knowledge and observations of existing large systems, it is anticipated that exascale systems will experience various kinds of faults many times per day. It is also anticipated that the current approach to resilience, which relies on automatic or application-level checkpoint/restart, will not work because the time to checkpoint and restart will exceed the mean time to failure (MTTF) of the full system. These projections leave the HPC fault-tolerance community with a difficult challenge: finding new approaches, possibly radically disruptive ones, that allow applications to run to normal termination despite the essentially unstable nature of exascale systems. Yet the community has only five to six years to solve the problem. This white paper synthesizes the motivations, observations, and research issues regarded as decisive by several experts with complementary HPC backgrounds in applications, programming models, distributed systems, and system management.
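
To make the checkpoint/restart argument concrete, here is a minimal back-of-the-envelope sketch using Young's classic first-order approximation for the optimal checkpoint interval; the specific numbers (a 30-minute full-system checkpoint, a 1-hour system MTTF) are illustrative assumptions, not figures from the paper:

    % Young's first-order approximation for the optimal checkpoint
    % interval (derived assuming checkpoint cost \delta \ll MTTF M)
    \tau_{\mathrm{opt}} \approx \sqrt{2\,\delta M}
    % steady-state fraction of machine time lost to writing
    % checkpoints and redoing work lost to failures
    \text{overhead} \approx \frac{\delta}{\tau} + \frac{\tau}{2M}

With \delta = 0.5 h and M = 1 h, \tau_{\mathrm{opt}} = \sqrt{2 \times 0.5 \times 1} = 1 h and the overhead is roughly 0.5 + 0.5 = 1: the machine spends essentially all of its time checkpointing and recovering rather than computing. This is precisely the breakdown the abstract anticipates once checkpoint time approaches the system MTTF.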