Parallel matrix-vector product using approximate hierarchical methods

  • Authors:
  • Ananth Grama; Vipin Kumar; Ahmed Sameh

  • Affiliations:
  • Department of Computer Science, University of Minnesota, Minneapolis, MN (all authors)

  • Venue:
  • Supercomputing '95: Proceedings of the 1995 ACM/IEEE Conference on Supercomputing
  • Year:
  • 1995

Abstract

Matrix-vector products (mat-vecs) form the core of iterative methods used for solving dense linear systems. Such systems often arise from integral-equation formulations in electromagnetics, heat transfer, and wave propagation. In this paper, we present a parallel approximate method for computing the mat-vecs used in the solution of integral equations. We use this method to compute dense mat-vecs with hundreds of thousands of elements. The combined speedup obtained from approximate methods and parallel processing represents an improvement of several orders of magnitude over exact mat-vecs on uniprocessors. We demonstrate that our parallel formulation incurs minimal parallel-processing overhead and scales to a large number of processors. We also study the impact of varying the accuracy of the approximate mat-vec on overall time and on parallel efficiency. Experimental results are presented for 256-processor Cray T3D and Thinking Machines CM-5 parallel computers. We have achieved computation rates in excess of 5 GFLOPS on the T3D.
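
For readers unfamiliar with hierarchical approximate mat-vecs, the sketch below illustrates the general idea on a toy problem; it is not the algorithm of this paper. It assumes a 1D point set, the kernel A[i, j] = 1/|p_i - p_j|, a single level of clustering, and a Barnes-Hut-style opening parameter theta; the names approx_matvec, n_clusters, and theta are illustrative.

```python
# A minimal sketch of a hierarchical approximate mat-vec -- not the authors'
# algorithm.  Assumptions: 1D points, kernel A[i, j] = 1/|p_i - p_j| (i != j),
# one level of clustering, monopole (single-term) far-field approximation.
import numpy as np

def approx_matvec(points, x, n_clusters=32, theta=0.5):
    """Approximate y = A x with A[i, j] = 1/|points[i] - points[j]|, i != j."""
    points, x = np.asarray(points, float), np.asarray(x, float)
    n = len(points)
    order = np.argsort(points)                 # sort so clusters are contiguous
    pts, xs = points[order], x[order]
    bounds = np.linspace(0, n, n_clusters + 1, dtype=int)
    clusters = [slice(bounds[k], bounds[k + 1]) for k in range(n_clusters)]
    centers = np.array([pts[c].mean() for c in clusters])
    radii = np.array([np.ptp(pts[c]) / 2 for c in clusters])
    weights = np.array([xs[c].sum() for c in clusters])   # cluster "monopoles"

    y = np.zeros(n)
    for a, ca in enumerate(clusters):
        for b, cb in enumerate(clusters):
            d = abs(centers[a] - centers[b])
            if a != b and radii[b] < theta * d:
                # Far field: one interaction per target point instead of
                # one per (target, source) pair.
                y[ca] += weights[b] / np.abs(pts[ca] - centers[b])
            else:
                # Near field: exact dense block.
                diff = np.abs(pts[ca][:, None] - pts[cb][None, :])
                if a == b:
                    np.fill_diagonal(diff, np.inf)   # skip self-interaction
                y[ca] += (xs[cb] / diff).sum(axis=1)

    out = np.empty(n)
    out[order] = y                              # undo the initial sort
    return out

# Illustrative use on random data (compare against the exact O(n^2) product).
rng = np.random.default_rng(0)
p, x = rng.random(2000), rng.random(2000)
y = approx_matvec(p, x)
```

Tightening theta (or, in a real multilevel code, carrying more expansion terms) increases the exact near-field work and improves accuracy at the cost of time, which is the accuracy-versus-efficiency tradeoff the abstract refers to.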