Taming complexity in high performance computing

  • Authors: Rod Oldehoeft
  • Affiliations: Los Alamos National Laboratory, Los Alamos, NM
  • Venue: Computational science, mathematics and software
  • Year: 2002

Abstract

Today's high-performance computing environments, and the applications that must exploit them, have become far more complex than ever before. We now build ensembles of large, shared-memory parallel computers, linked by high-speed networks, in an attempt to achieve previously unheard-of speeds while still retaining a "general-purpose" capability for running diverse applications.

Demands for greater precision and realism in today's computer simulations of physical phenomena tax the imagination of the most aggressive system designers. Enhancing the accuracy of tomorrow's simulations requires simultaneously accounting for more physical, chemical and biological components. Predictive simulation is essential for making informed, science-based decisions on questions of national importance, including stockpile stewardship, global climate change, wildfires, earthquakes and epidemics. These applications are too massive and interrelated to be built, verified, tuned and maintained by conventional methods.

Software teams at Los Alamos National Laboratory, along with collaborators worldwide, are building an integrated software infrastructure for scientific simulation development. This paper describes the projects now underway in object-oriented frameworks, scalable run-time software, scientific visualization, software component architecture, and high-end and experimental computer systems. We include achieved results and the current status of each project.