Exascale computing technology challenges

  • Authors:
  • John Shalf; Sudip Dosanjh; John Morrison

  • Affiliations:
  • NERSC Division, Lawrence Berkeley National Laboratory, Berkeley, California; Sandia National Laboratories, New Mexico; Los Alamos National Laboratory, Los Alamos, New Mexico

  • Venue:
  • VECPAR'10: Proceedings of the 9th International Conference on High Performance Computing for Computational Science
  • Year:
  • 2010

Abstract

High Performance Computing (HPC) architectures are expected to change dramatically in the next decade as power and cooling constraints limit increases in microprocessor clock speeds. Consequently, computer companies are rapidly increasing on-chip parallelism to improve performance. The traditional doubling of clock speeds every 18-24 months is being replaced by a doubling of cores or other parallelism mechanisms. During the next decade, the amount of parallelism on a single microprocessor will rival the number of nodes in the early massively parallel supercomputers built in the 1980s. Applications and algorithms will need to change and adapt as node architectures evolve; in particular, they will need to manage locality to achieve performance. A key element of the strategy moving forward is the co-design of applications, architectures, and programming environments. There is an unprecedented opportunity for application and algorithm developers to influence the direction of future architectures so that they meet DOE mission needs. This article describes the technology challenges on the road to exascale, their underlying causes, and their effect on the future of HPC system design.