A geometric theory for preconditioned inverse iteration applied to a subspace
Mathematics of Computation
The design and implementation of the MRRR algorithm
ACM Transactions on Mathematical Software (TOMS)
Null space and eigenspace computations with additive preprocessing
Proceedings of the 2007 international workshop on Symbolic-numeric computation
Algorithm 880: A testing infrastructure for symmetric tridiagonal eigensolvers
ACM Transactions on Mathematical Software (TOMS)
Solving the quadratic trust-region subproblem in a low-memory BFGS framework
Optimization Methods & Software
ACM Transactions on Mathematical Software (TOMS)
An Improved Arc Algorithm for Detecting Definite Hermitian Pairs
SIAM Journal on Matrix Analysis and Applications
Computing eigenvectors of block tridiagonal matrices based on twisted block factorizations
Journal of Computational and Applied Mathematics
Technical Communique: Upper bounds for the solution of the discrete algebraic Lyapunov equation
Automatica (Journal of IFAC)
An overview on the eigenvalue computation for matrices
Neural, Parallel & Scientific Computations
Detecting Localization in an Invariant Subspace
SIAM Journal on Scientific Computing
The purpose of this paper is two-fold: to analyze the behavior of inverse iteration for computing a single eigenvector of a complex square matrix, and to review Jim Wilkinson's contributions to the development of the method. In the process we derive several new results regarding the convergence of inverse iteration in exact arithmetic.

In the case of normal matrices we show that residual norms decrease strictly monotonically. For eighty percent of the starting vectors a single iteration is enough.

In the case of non-normal matrices, we show that the iterates converge asymptotically to an invariant subspace. However, the residual norms may not converge. The growth in residual norms from one iteration to the next can exceed the departure of the matrix from normality. We present an example where the residual growth is exponential in the departure of the matrix from normality. We also explain the often significant regress of the residuals after the first iteration: it occurs when the non-normal part of the matrix is large compared to the eigenvalues of smallest magnitude. In this case computing an eigenvector with inverse iteration is exponentially ill conditioned (in exact arithmetic).

We conclude that the behavior of the residuals in inverse iteration is governed by the departure of the matrix from normality rather than by the conditioning of a Jordan basis or the defectiveness of eigenvalues.
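The method discussed in the abstract can be sketched in a few lines: with a fixed shift close to the target eigenvalue, repeatedly solve a linear system and normalize, tracking the residual norm of each iterate. This is a minimal illustrative sketch, not the paper's analysis; the symmetric test matrix and the shift offset of 1e-3 are assumptions chosen to exhibit the monotone residual decrease claimed for normal matrices.

```python
import numpy as np

def inverse_iteration(A, shift, x0, iters=5):
    """Inverse iteration with a fixed shift: repeatedly solve
    (A - shift*I) y = x, then normalize. Returns the final iterate
    and the residual norms ||A x - shift x||_2 of each iterate."""
    n = A.shape[0]
    M = A - shift * np.eye(n)
    x = x0 / np.linalg.norm(x0)
    residuals = []
    for _ in range(iters):
        y = np.linalg.solve(M, x)   # in practice an LU factorization of M is reused
        x = y / np.linalg.norm(y)
        residuals.append(np.linalg.norm(A @ x - shift * x))
    return x, residuals

# Normal (here: symmetric) example; residual norms should not grow.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2                   # symmetric, hence normal
shift = np.linalg.eigvalsh(A)[0] + 1e-3   # near the smallest eigenvalue
x0 = rng.standard_normal(6)
x, res = inverse_iteration(A, shift, x0)
```

For a non-normal matrix the same loop still drives the iterates toward an invariant subspace, but, as the abstract notes, the residual norms may fail to decrease from one step to the next.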