Matrix analysis
Expokit: A Software Package for Computing Matrix Exponentials
ACM Transactions on Mathematical Software (TOMS)
A Schur–Fréchet Algorithm for Computing the Logarithm and Exponential of a Matrix
SIAM Journal on Matrix Analysis and Applications
A Block Algorithm for Matrix 1-Norm Estimation, with an Application to 1-Norm Pseudospectra
SIAM Journal on Matrix Analysis and Applications
Accuracy and Stability of Numerical Algorithms
A Schur-Parlett Algorithm for Computing Matrix Functions
SIAM Journal on Matrix Analysis and Applications
The Scaling and Squaring Method for the Matrix Exponential Revisited
SIAM Journal on Matrix Analysis and Applications
Functions of Matrices: Theory and Computation (Other Titles in Applied Mathematics)
SIAM Journal on Matrix Analysis and Applications
The scaling and squaring method for the matrix exponential is based on the approximation $e^A \approx (r_m(2^{-s}A))^{2^s}$, where $r_m(x)$ is the $[m/m]$ Padé approximant to $e^x$ and the integers $m$ and $s$ are to be chosen. Several authors have identified a weakness of existing scaling and squaring algorithms termed overscaling, in which a value of $s$ much larger than necessary is chosen, causing a loss of accuracy in floating point arithmetic. Building on the scaling and squaring algorithm of Higham [SIAM J. Matrix Anal. Appl., 26 (2005), pp. 1179-1193], which is used by the MATLAB function expm, we derive a new algorithm that alleviates the overscaling problem. Two key ideas are employed. The first, specific to triangular matrices, is to compute the diagonal elements in the squaring phase as exponentials instead of from powers of $r_m$. The second idea is to base the backward error analysis that underlies the algorithm on members of the sequence $\{\|A^k\|^{1/k}\}$ instead of $\|A\|$, since for nonnormal matrices it is possible that $\|A^k\|^{1/k}$ is much smaller than $\|A\|$, and indeed this is likely when overscaling occurs in existing algorithms. The terms $\|A^k\|^{1/k}$ are estimated without computing powers of $A$ by using a matrix 1-norm estimator in conjunction with a bound of the form $\|A^k\|^{1/k} \le \max\bigl( \|A^p\|^{1/p}, \|A^q\|^{1/q} \bigr)$ that holds for certain fixed $p$ and $q$ less than $k$. The improvements to the truncation error bounds have to be balanced by the potential for a large $\|A\|$ to cause inaccurate evaluation of $r_m$ in floating point arithmetic. We employ rigorous error bounds along with some heuristics to ensure that rounding errors are kept under control. 
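To make the basic mechanism concrete, the following is a minimal Python sketch of plain scaling and squaring with a fixed [m/m] Padé degree. The function name `pade_expm`, the degree `m = 6`, and the scaling threshold `theta` are illustrative choices only; this is not the paper's algorithm, which instead selects $m$ and $s$ from backward error bounds based on the quantities $\|A^k\|^{1/k}$ and adds the triangular-diagonal correction described above.

```python
import numpy as np

def pade_expm(A, m=6, theta=0.5):
    """Simplified scaling-and-squaring sketch (illustrative, not the
    paper's algorithm): e^A ~= (r_m(2^{-s} A))^{2^s} for real A, with a
    fixed [m/m] Pade degree and a norm-based choice of s."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # Choose s so that ||2^{-s} A||_1 <= theta before applying the Pade
    # approximant.  (The paper bases this choice on ||A^k||^{1/k} instead
    # of ||A||, precisely to avoid overscaling for nonnormal matrices.)
    nrm = np.linalg.norm(A, 1)
    s = max(0, int(np.ceil(np.log2(nrm / theta)))) if nrm > theta else 0
    B = A / 2.0**s
    # [m/m] Pade approximant to e^x is p(x)/q(x) with q(x) = p(-x) and
    # coefficients c_k = (2m-k)! m! / ((2m)! k! (m-k)!), built here by the
    # recurrence c_k = c_{k-1} * (m-k+1) / ((2m-k+1) k).
    c = [1.0]
    for k in range(1, m + 1):
        c.append(c[-1] * (m - k + 1) / ((2 * m - k + 1) * k))
    P = np.zeros_like(B)
    Q = np.zeros_like(B)
    X = np.eye(n)  # running power B^k
    for k in range(m + 1):
        P += c[k] * X
        Q += c[k] * (-1) ** k * X
        X = X @ B
    # r_m(B) = q(B)^{-1} p(B), computed via a linear solve.
    R = np.linalg.solve(Q, P)
    # Squaring phase: undo the scaling by repeated squaring.
    for _ in range(s):
        R = R @ R
    return R
```

The overscaling phenomenon is easy to see on a matrix such as $A = \begin{pmatrix} 1 & b \\ 0 & 1 \end{pmatrix}$ with $b$ large: here $\|A\|_1 \approx b$ while $\|A^2\|^{1/2} \approx \sqrt{2b}$, so a norm-based choice of $s$ in the sketch above scales far more aggressively than the $\|A^k\|^{1/k}$-based analysis requires.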
Our numerical experiments show that the new algorithm generally provides accuracy at least as good as the existing algorithm of Higham at no higher cost, while for matrices that are triangular or cause overscaling it usually yields significant improvements in accuracy, cost, or both.