Computational experience with sequential and parallel, preconditioned Jacobi-Davidson for large, sparse symmetric matrices

  • Authors:
  • Luca Bergamaschi, Giorgio Pini, Flavio Sartoretto

  • Affiliations:
  • Dipartimento di Metodi e Modelli Matematici per le Scienze Applicate, Università di Padova, Via Belzoni 7, 35131 Padova PD, Italy (Bergamaschi, Pini); Dipartimento di Informatica, Università di Venezia, Via Torino 155, 30171 Mestre VE, Italy (Sartoretto)

  • Venue:
  • Journal of Computational Physics
  • Year:
  • 2003

Abstract

The Jacobi-Davidson (JD) algorithm was recently proposed for computing a number of eigenvalues of a matrix. JD goes beyond pure Krylov-subspace techniques: it expands its search space by solving the so-called correction equation, thus in principle providing a more powerful method. Preconditioning the correction equation is mandatory when large, sparse matrices are analyzed. We considered several preconditioners: classical block-Jacobi and IC(0), together with the approximate inverse preconditioners AINV and FSAI. The rationale for using approximate inverse preconditioners is their high parallelization potential, combined with their efficiency in accelerating the iterative solution of the correction equation. We analyzed the sequential performance of preconditioned JD for the spectral decomposition of large, sparse matrices which originate in the numerical integration of partial differential equations arising in physical and engineering problems. We found that JD is highly sensitive to the preconditioner and can display irregular convergence behavior. We parallelized JD by data-splitting techniques, combined with techniques that reduce the amount of communicated data. We ran our parallel, preconditioned code on a dedicated parallel machine and report the results of our experiments. Our JD code attains an appreciable degree of parallelism. Its performance is also compared with that of PARPACK and of a parallel DACG code.
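
For reference, the correction equation mentioned in the abstract takes the following standard form in the symmetric case (it is not stated in the abstract itself): given the current Ritz pair (\theta, u), with \|u\| = 1 and residual r = Au - \theta u, JD seeks a correction t \perp u from

    (I - uu^T)\,(A - \theta I)\,(I - uu^T)\, t = -r .

The search space is then expanded with (an orthogonalized) t. Because this system is solved only approximately, by a preconditioned iterative method, the quality of the preconditioner directly drives the outer convergence; this is the sensitivity reported above.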
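
To make the structure of the method concrete, here is a minimal Python/SciPy sketch of a preconditioned JD outer iteration. It is not the authors' code: jd_smallest_eig, M_solve, and the point-Jacobi stand-in preconditioner are our own illustrative names and choices; any of the preconditioners studied in the paper (block-Jacobi, IC(0), AINV, FSAI) would enter through M_solve.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import LinearOperator, cg

    def jd_smallest_eig(A, M_solve, v0, n_outer=50, tol=1e-8):
        """Approximate the smallest eigenpair of a symmetric matrix A.
        M_solve(x) applies a fixed preconditioner approximating A^{-1}."""
        n = A.shape[0]
        V = (v0 / np.linalg.norm(v0)).reshape(n, 1)   # search-space basis
        for _ in range(n_outer):
            H = V.T @ (A @ V)                          # Rayleigh-Ritz projection
            w, S = np.linalg.eigh(H)
            theta, u = w[0], V @ S[:, 0]               # smallest Ritz pair
            r = A @ u - theta * u
            if np.linalg.norm(r) < tol:
                break
            # Correction-equation operator, restricted to the complement of u.
            def op(x):
                x = x - u * (u @ x)
                y = A @ x - theta * x
                return y - u * (u @ y)
            # Projected preconditioner (simplified: the full JD preconditioner
            # carries an extra rank-one correction, omitted in this sketch).
            def prec(x):
                y = M_solve(x)
                return y - u * (u @ y)
            Aop = LinearOperator((n, n), matvec=op)
            Mop = LinearOperator((n, n), matvec=prec)
            # Inexact inner solve; the projected operator can be indefinite
            # before theta settles, so a MINRES-type solver is safer in general.
            t, _ = cg(Aop, -r, M=Mop, maxiter=10)
            t = t - V @ (V.T @ t)                      # re-orthogonalize
            V = np.hstack([V, (t / np.linalg.norm(t)).reshape(n, 1)])
        return theta, u

    # Usage: 1D Laplacian, point-Jacobi (diagonal) stand-in preconditioner.
    n = 400
    A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    d = A.diagonal()
    theta, u = jd_smallest_eig(A, lambda x: x / d,
                               np.random.default_rng(0).standard_normal(n))
    print(theta)   # close to 4*sin(pi/(2*(n+1)))**2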
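
On the parallelization rationale for approximate inverses: FSAI constructs a sparse lower triangular factor G with G^T G ≈ A^{-1}, so applying the preconditioner amounts to two sparse matrix-vector products, which distribute naturally under the row-wise data splitting mentioned above, whereas IC(0) requires inherently sequential triangular solves. As a one-line illustration (apply_fsai is a hypothetical name, with G a SciPy sparse matrix):

    # M^{-1} x = G^T (G x): two sparse mat-vecs, no forward/backward substitution.
    def apply_fsai(G, x):
        return G.T @ (G @ x)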