Exascale science: the next frontier in high performance computing

  • Author: Stephen S. Pawlowski
  • Affiliation: Intel Corporation
  • Venue: Proceedings of the 24th ACM International Conference on Supercomputing
  • Year: 2010

Abstract

Scientific computation has come into its own as a mature technology in all fields of science and engineering. Never before have we been able to accurately anticipate, analyze, and plan for complex future events, from the analysis of a human cell to climate change decades into the future. In combination with theory and experiment, scientific computation provides a valuable tool for understanding causes as well as identifying solutions as we examine complex systems containing billions of elements. However, we still cannot do it all, and there is a need for more computational capacity, especially in areas such as biology and medicine, materials science, climate, and national security. Today's petascale systems (capable of 10^15 floating point operations per second) have accelerated studies that were not possible three years ago and have begun to address some of the challenges in the areas mentioned. However, researchers indicate that they need far more powerful computing tools to meet the ever increasing challenges of an increasingly complex world. Exascale systems (capable of 10^18 floating point operations per second), with a processing capability close to that of the human brain, will enable the unraveling of longstanding scientific mysteries and present new opportunities. The question now is: what does it take to build an exascale system? The path from teraflops to petaflops was driven by the growth of multi-core processors. While it is likely that an exascale system will comprise millions of cores, simply riding the multi-core trend may not allow us to develop a sustainable exascale system. A number of challenges surface as we increase the number of cores in a CPU (Central Processing Unit). The first and most pressing issue is power consumption: an exascale system built with today's technology would consume over a gigawatt of power. Other issues, such as software scalability, memory, I/O, and storage bandwidth, and system resiliency, stem from the fact that processing power is outpacing the capabilities of all the surrounding technologies. But the sky is not really falling. While there appear to be no ideal solutions today, new approaches will emerge that provide fundamental breakthroughs in hardware technology, parallel programming, and resiliency. In this talk, the speaker addresses the challenges we face in developing an exascale system, along with the technical shifts needed to mitigate some of these challenges.
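
The gigawatt figure follows from a simple scaling argument: hold today's energy efficiency fixed and scale performance up to 10^18 FLOPS. The sketch below illustrates that arithmetic using assumed, illustrative numbers for a representative circa-2010 petascale machine (the specific values are not given in the abstract).

```python
# Back-of-envelope estimate of exascale power draw, holding energy efficiency
# at roughly circa-2010 petascale levels. The petascale figures below are
# illustrative assumptions, not numbers from the talk.
PETASCALE_FLOPS = 2e15    # assumed peak performance of a 2010-era system
PETASCALE_POWER_W = 6e6   # assumed power draw of that system, in watts
EXASCALE_FLOPS = 1e18     # target: 10^18 floating point operations per second

efficiency_flops_per_watt = PETASCALE_FLOPS / PETASCALE_POWER_W
exascale_power_w = EXASCALE_FLOPS / efficiency_flops_per_watt

print(f"Assumed efficiency: {efficiency_flops_per_watt / 1e6:.0f} MFLOPS/W")
print(f"Naive exascale power: {exascale_power_w / 1e9:.1f} GW")  # ~3 GW, well over a gigawatt
```

Under these assumptions the naive scaling lands around 3 GW, which is why the talk argues that efficiency must improve by orders of magnitude rather than relying on the multi-core trend alone.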