Parallel ℋ-Matrix Arithmetics on Shared Memory Systems

  • Authors:
  • R. Kriemann

  • Affiliations:
  • Max-Planck-Institute for Mathematics in the Sciences, Inselstr. 22–26, 04103, Leipzig, Germany

  • Venue:
  • Computing
  • Year:
  • 2005

Abstract

ℋ-matrices, as introduced in previous papers, allow the use of the common matrix arithmetic in an efficient, almost optimal way. This article is concerned with the parallelisation of this arithmetic, in particular matrix building, matrix-vector multiplication, matrix multiplication and matrix inversion.

Of special interest is the design of algorithms which reuse as much as possible of the corresponding sequential methods, thereby keeping the effort needed to update an existing implementation at a minimum. This can be achieved by exploiting the properties of shared memory systems, which are widely available in the form of workstations or compute servers. These systems provide a simple and commonly supported programming interface in the form of POSIX threads.

The theoretical results for the parallel algorithms are confirmed with numerical examples from BEM and FEM applications.