Reducing the influence of tiny normwise relative errors on performance profiles

  • Authors:
  • Nicholas J. Dingle and Nicholas J. Higham

  • Affiliations:
  • The University of Manchester, Manchester, UK

  • Venue:
  • ACM Transactions on Mathematical Software (TOMS)
  • Year:
  • 2013

Abstract

It is a widespread but little-noticed phenomenon that the normwise relative error ‖x - y‖/‖x‖ of vectors x and y of floating point numbers of the same precision, where y is an approximation to x, can be many orders of magnitude smaller than the unit roundoff. We analyze this phenomenon and show that in the ∞-norm it happens precisely when x has components of widely varying magnitude and every component of x of largest magnitude agrees with the corresponding component of y. Performance profiles are a popular way to compare competing algorithms according to particular measures of performance. We show that performance profiles based on normwise relative errors can give a misleading impression due to the influence of zero or tiny normwise relative errors. We propose a transformation that reduces the influence of these extreme errors in a controlled manner, while preserving the monotonicity of the underlying data and leaving the performance profile unchanged at its left end-point. Numerical examples with both artificial and genuine data illustrate the benefits of the transformation.
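The following is a minimal sketch, not taken from the paper, that illustrates the phenomenon described in the abstract: when x has components of widely varying magnitude and every largest-magnitude component of x agrees exactly with the corresponding component of y, the ∞-norm relative error can be many orders of magnitude smaller than the unit roundoff, even though the small components carry full-size relative perturbations. The vectors and the perturbation size used here are purely illustrative assumptions.

```python
import numpy as np

# Unit roundoff u for IEEE double precision (half the machine epsilon).
u = np.finfo(np.float64).eps / 2

# x has one huge component and several tiny ones (widely varying magnitudes).
x = np.array([1e16, 1.0, 2.0, 3.0])

# y reproduces the largest component of x exactly, but perturbs the small
# components by relative errors of order u.
y = x.copy()
y[1:] *= 1 + 10 * u

# Normwise relative error in the infinity norm.
err = np.linalg.norm(x - y, np.inf) / np.linalg.norm(x, np.inf)

print(f"unit roundoff u          = {u:.2e}")   # about 1.1e-16
print(f"normwise relative error  = {err:.2e}")  # many orders of magnitude below u
```

Because the largest component dominates both the numerator and the denominator, the perturbations in the small components are scaled away, which is exactly the mechanism that can distort error-based performance profiles as described above.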