Efficient parallel solution of linear systems

  • Authors:
  • V. Pan; J. Reif


  • Venue:
  • STOC '85: Proceedings of the Seventeenth Annual ACM Symposium on Theory of Computing
  • Year:
  • 1985


Abstract

The most efficient known parallel algorithms for inverting a nonsingular n × n matrix A, or for solving a linear system Ax = b over the rationals, require O(log n)^2 time and M(n)·n^0.5 processors, where M(n) is the number of processors required to multiply two n × n rational matrices in O(log n) time. Furthermore, all known polylog-time algorithms for these problems are unstable: they require the calculation to be done with perfect precision; otherwise they give no results at all.

This paper describes parallel algorithms that have good numerical stability and remain efficient as n grows large. In particular, we describe a quadratically convergent iterative method that computes the inverse (within relative precision 2^(−n^O(1))) of an n × n rational matrix A with condition number ≤ n^O(1) in O(log n)^2 time using M(n) processors. This is the optimal processor bound, an n^0.5-factor improvement over known processor bounds for polylog-time matrix inversion, and it yields the first known polylog-time algorithm that is numerically stable. The algorithm relies on our method of computing an approximate inverse of A in O(log n) parallel steps using n^2 processors.

We also give a parallel algorithm for solving a linear system Ax = b with a sparse n × n symmetric positive definite matrix A. If the graph G(A) (which has n vertices and an edge for each nonzero entry of A) is s(n)-separable, then our algorithm requires only O((log n)(log s(n))^2) time and |E| + M(s(n)) processors. The algorithm computes a recursive factorization of A, so that solving any other linear system Ax = b′ with the same matrix A requires only O(log n · log s(n)) time and |E| + s(n)^2 processors.
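The quadratically convergent iteration the abstract refers to can be illustrated with the classical Newton (Newton–Schulz) iteration X ← X(2I − AX), whose residual I − AX is squared at every step. The sketch below is a minimal serial NumPy illustration under an assumed safe starting guess X₀ = Aᵀ/(‖A‖₁‖A‖∞); the paper's actual algorithm additionally controls precision and processor counts, which this sketch does not model.

```python
import numpy as np

def newton_schulz_inverse(A, iters=30):
    """Approximate A^{-1} by the quadratically convergent iteration
    X_{k+1} = X_k (2I - A X_k).

    Starting guess X_0 = A^T / (||A||_1 * ||A||_inf) guarantees
    ||I - A X_0|| < 1 for nonsingular A, so the residual is squared
    (hence roughly halved in bit-length of error) at every step.
    """
    n = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)  # residual I - AX is squared here
    return X
```

Each iteration costs two matrix multiplications, which is why the processor bound of the parallel version is governed by M(n), the cost of parallel matrix multiplication.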
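The payoff of the recursive factorization in the sparse case is that the expensive work is done once per matrix, after which each new right-hand side b′ is cheap. A minimal serial analogue, assuming a dense Cholesky factorization in place of the paper's separator-based recursive factorization, is to factor A = LLᵀ once and reuse L for every right-hand side:

```python
import numpy as np

def cholesky_solve(L, b):
    """Solve A x = b given A = L L^T, by forward then back substitution."""
    n = L.shape[0]
    y = np.zeros(n)
    for i in range(n):                    # forward: L y = b
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):        # backward: L^T x = y
        x[i] = (y[i] - L[i + 1:, i] @ x[i + 1:]) / L[i, i]
    return x

# Tridiagonal SPD matrix: its graph G(A) is a path, hence O(1)-separable.
n = 5
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L = np.linalg.cholesky(A)                 # factor once (the expensive step)
x1 = cholesky_solve(L, np.ones(n))        # first right-hand side b
x2 = cholesky_solve(L, np.arange(n, dtype=float))  # reuse L for a new b'
```

In the paper's setting the factorization is organized around graph separators so that, once computed, each additional solve takes only O(log n · log s(n)) parallel time.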