Fast sparse matrix-vector multiplication for TeraFlop/s computers

  • Authors:
  • Gerhard Wellein; Georg Hager; Achim Basermann; Holger Fehske

  • Affiliations:
  • Regionales Rechenzentrum Erlangen, Erlangen, Germany; Regionales Rechenzentrum Erlangen, Erlangen, Germany; C&C Research Laboratories, NEC Europe Ltd, Sankt Augustin, Germany; Institut für Physik, Universität Greifswald, Greifswald, Germany

  • Venue:
  • VECPAR'02 Proceedings of the 5th international conference on High performance computing for computational science
  • Year:
  • 2002

Abstract

Eigenvalue problems involving very large sparse matrices are common to many fields of science. In general, the numerical core of iterative eigenvalue algorithms is a matrix-vector multiplication (MVM) involving the large sparse matrix. We present three different programming approaches for parallel MVM on present-day supercomputers. In addition to a pure message-passing approach, two hybrid parallel implementations are introduced, based on the simultaneous use of message-passing and shared-memory programming models. For a modern SMP cluster (HITACHI SR8000), the performance and scalability of the hybrid implementations are discussed and compared with the pure message-passing approach on massively parallel systems (CRAY T3E), vector computers (NEC SX5e), and distributed shared-memory systems (SGI Origin3800).
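
To make the MVM kernel discussed in the abstract concrete, the sketch below shows a sparse matrix-vector product in compressed row storage (CRS) with an OpenMP worksharing directive, roughly corresponding to the shared-memory level of a hybrid approach. The storage format, the function name spmv_crs, and the array names (val, col_idx, row_ptr) are assumptions chosen for illustration; they are not taken from the paper, and the authors' actual implementation may differ.

    /* Illustrative sketch only: y = A*x for an n x n sparse matrix in
     * compressed row storage (CRS). Names and format are assumptions,
     * not the paper's implementation. Compile with -fopenmp to enable
     * the shared-memory parallel loop; it also runs serially. */
    #include <stdio.h>

    static void spmv_crs(int n, const double *val, const int *col_idx,
                         const int *row_ptr, const double *x, double *y)
    {
        /* Each row's partial sums are independent, so rows can be
         * distributed across threads without synchronization. */
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < n; ++i) {
            double sum = 0.0;
            for (int j = row_ptr[i]; j < row_ptr[i + 1]; ++j)
                sum += val[j] * x[col_idx[j]];
            y[i] = sum;
        }
    }

    int main(void)
    {
        /* Small 3x3 example matrix:
         *   [ 2 0 1 ]
         *   [ 0 3 0 ]
         *   [ 4 0 5 ]
         */
        double val[]  = { 2.0, 1.0, 3.0, 4.0, 5.0 };
        int    col[]  = { 0, 2, 1, 0, 2 };
        int    rowp[] = { 0, 2, 3, 5 };
        double x[]    = { 1.0, 1.0, 1.0 };
        double y[3];

        spmv_crs(3, val, col, rowp, x, y);
        for (int i = 0; i < 3; ++i)
            printf("y[%d] = %g\n", i, y[i]);
        return 0;
    }

In a hybrid message-passing/shared-memory setting of the kind the abstract describes, a kernel like this would typically handle the node-local part of the product, while MPI communication exchanges the required vector elements between nodes; the exact partitioning and overlap strategy is specific to the paper and not reproduced here.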