Computing an Eigenvector with Inverse Iteration

  • Authors:
  • Ilse C. F. Ipsen

  • Venue:
  • SIAM Review
  • Year:
  • 1997

Abstract

The purpose of this paper is two-fold: to analyze the behavior of inverse iteration for computing a single eigenvector of a complex square matrix and to review Jim Wilkinson's contributions to the development of the method. In the process we derive several new results regarding the convergence of inverse iteration in exact arithmetic.

In the case of normal matrices we show that residual norms decrease strictly monotonically. For eighty percent of the starting vectors a single iteration is enough.

In the case of non-normal matrices, we show that the iterates converge asymptotically to an invariant subspace. However, the residual norms may not converge. The growth in residual norms from one iteration to the next can exceed the departure of the matrix from normality. We present an example where the residual growth is exponential in the departure of the matrix from normality. We also explain the often significant regress of the residuals after the first iteration: it occurs when the non-normal part of the matrix is large compared to the eigenvalues of smallest magnitude. In this case computing an eigenvector with inverse iteration is exponentially ill conditioned (in exact arithmetic).

We conclude that the behavior of the residuals in inverse iteration is governed by the departure of the matrix from normality rather than by the conditioning of a Jordan basis or the defectiveness of eigenvalues.
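For readers unfamiliar with the method, the following is a minimal sketch of basic inverse iteration with residual monitoring, which is the quantity the abstract discusses. It is illustrative only, not the paper's algorithm or analysis; the function name `inverse_iteration` and its parameters (`shift`, `max_iter`, `tol`, the random starting vector) are assumptions for this sketch.

```python
import numpy as np

def inverse_iteration(A, shift, x0=None, max_iter=20, tol=1e-12):
    """Approximate an eigenvector of A for the eigenvalue nearest `shift`.

    Each step solves (A - shift*I) y = x and normalizes y.
    Also records the residual norms ||(A - shift*I) x_k|| whose behavior
    the paper analyzes.
    """
    n = A.shape[0]
    rng = np.random.default_rng(0)
    # Complex random starting vector (illustrative choice, not from the paper).
    x = x0 if x0 is not None else rng.standard_normal(n) + 1j * rng.standard_normal(n)
    x = x / np.linalg.norm(x)
    M = A - shift * np.eye(n)
    residuals = []
    for _ in range(max_iter):
        residuals.append(np.linalg.norm(M @ x))
        if residuals[-1] < tol:
            break
        y = np.linalg.solve(M, x)   # one linear solve per iteration
        x = y / np.linalg.norm(y)   # normalize to avoid overflow
    return x, residuals
```

For a normal matrix the recorded residual norms should decrease monotonically, as the abstract states; for a non-normal matrix they may grow from one step to the next even though the iterates approach an invariant subspace.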