Recent trends in the marketplace of high performance computing

  • Authors:
  • Erich Strohmaier;Jack J. Dongarra;Hans W. Meuer;Horst D. Simon

  • Affiliations:
  • CRD, Lawrence Berkeley National Laboratory, Berkeley, CA;Department of Computer Science, University of Tennessee, Knoxville, TN and Mathematical Sciences Section, Oak Ridge National Laboratory, Oak Ridge, TN;Computing Center, University of Mannheim, Mannheim, Germany;Lawrence Berkeley National Laboratory, Berkeley, CA

  • Venue:
  • Parallel Computing

  • Year:
  • 2005


Abstract

In this paper we analyze major recent trends and changes in the High Performance Computing (HPC) marketplace. The introduction of vector computers started the era of 'Supercomputing'. The initial success of vector computers in the seventies was driven by raw performance. Massively parallel processing (MPP) systems became successful in the early nineties due to their better price/performance ratios, which were enabled by the attack of the 'killer micros'. The success of microprocessor-based systems built on the shared-memory concept (referred to as symmetric multiprocessors (SMP)), even for very high-end systems, was the basis for the cluster concepts that emerged in the early 2000s. Within the first half of this decade, clusters of PCs and workstations have become the prevalent architecture for many HPC application areas across all ranges of performance. However, the Earth Simulator vector system demonstrated that many scientific applications can still benefit greatly from other computer architectures. At the same time, there is renewed broad interest in the scientific HPC community in new hardware architectures and new programming paradigms. The IBM BlueGene/L system is one early example of a shifting design focus for large-scale systems. The DARPA HPCS program has the declared goal of building a petaflops computer system by the end of the decade using novel computer architectures.