It is a widespread but little-noticed phenomenon that the normwise relative error ‖x - y‖/‖x‖ of vectors x and y of floating point numbers of the same precision, where y is an approximation to x, can be many orders of magnitude smaller than the unit roundoff. We analyze this phenomenon and show that in the ∞-norm it happens precisely when x has components of widely varying magnitude and every component of x of largest magnitude agrees with the corresponding component of y. Performance profiles are a popular way to compare competing algorithms according to particular measures of performance. We show that performance profiles based on normwise relative errors can give a misleading impression due to the influence of zero or tiny normwise relative errors. We propose a transformation that reduces the influence of these extreme errors in a controlled manner, while preserving the monotonicity of the underlying data and leaving the performance profile unchanged at its left end-point. Numerical examples with both artificial and genuine data illustrate the benefits of the transformation.
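The phenomenon described above can be sketched in a few lines of NumPy (an illustrative example, not code from the paper): a vector x with components of widely varying magnitude, whose largest component is reproduced exactly by the approximation y, yields an ∞-norm relative error many orders of magnitude below the double-precision unit roundoff u = 2^-53.

```python
import numpy as np

# Double-precision unit roundoff.
u = 2.0 ** -53

# A vector with components of widely varying magnitude.
x = np.array([1.0, 1e-12, 1e-12])

# An approximation y that agrees exactly with the largest component of x,
# but whose tiny components are each off by one ulp.
y = x.copy()
y[1] = np.nextafter(x[1], 1.0)
y[2] = np.nextafter(x[2], 1.0)

# Normwise relative error in the infinity norm.
err = np.linalg.norm(x - y, np.inf) / np.linalg.norm(x, np.inf)

# err is roughly 2e-28, far smaller than u ~ 1.1e-16, even though the
# tiny components of y carry a full one-ulp relative perturbation.
print(err, err < u)
```

The effect arises because the ∞-norm of x − y is governed by the absolute errors in the tiny components, while the denominator ‖x‖∞ is set by the (exactly matched) largest component, so the ratio can sit far below u.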