Theorems on Efficient Argument Reductions
ARITH '03 Proceedings of the 16th IEEE Symposium on Computer Arithmetic (ARITH-16'03)
Abstract--A common practice for computing an elementary transcendental function in a libm implementation nowadays has two phases: the input argument is reduced to fall into a tiny interval, and the function is then approximated on that interval by a polynomial. Typically, the interval is made small enough that polynomials of very high degree are not required for an accurate approximation. The approximating polynomials are usually taken to be either the best polynomials or alternatives such as the Chebyshev interpolating polynomials. The best polynomial of degree n has the property that its maximum deviation from the function over the interval is the smallest among all polynomials of degree no higher than n, so it is natural to prefer the best polynomials over others. In this paper, it is proven that, in computing elementary functions, the best polynomial can be more accurate than the Chebyshev interpolating polynomial of the same degree by at most a fraction of a bit; in other words, the Chebyshev interpolating polynomials do just as well as the best polynomials. Similar results were obtained in 1967 by Powell, who, however, did not target elementary function computations in particular and placed no assumptions on the function; remarkably, his results imply accuracy differences of no more than 2 to 3 bits in the context of this paper.
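As a toy illustration of the two phases described above (a sketch only, not any of the cited table-driven implementations; all function names here are made up), consider exp: phase one writes x = k*ln 2 + r with |r| <= (ln 2)/2, so that exp(x) = 2^k * exp(r); phase two approximates exp(r) on that tiny interval by the Chebyshev interpolating polynomial, i.e. the polynomial interpolating the function at Chebyshev nodes.

```python
import math

def reduce_exp(x):
    # Phase 1: argument reduction. x = k*ln2 + r, |r| <= (ln 2)/2,
    # so exp(x) = 2**k * exp(r).
    k = round(x / math.log(2))
    r = x - k * math.log(2)
    return k, r

def cheb_nodes(n, a, b):
    # Chebyshev points of the first kind, mapped from [-1, 1] to [a, b];
    # interpolating at these nodes gives the Chebyshev interpolating polynomial.
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos((2 * i + 1) * math.pi / (2 * (n + 1)))
            for i in range(n + 1)]

def lagrange_eval(xs, ys, t):
    # Straightforward Lagrange evaluation; fine for the tiny degrees used here.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (t - xj) / (xi - xj)
        total += yi * w
    return total

# Phase 2: degree-5 Chebyshev interpolant of exp on the reduction interval.
a, b = -math.log(2) / 2, math.log(2) / 2
xs = cheb_nodes(5, a, b)
ys = [math.exp(x) for x in xs]

def my_exp(x):
    k, r = reduce_exp(x)
    return math.ldexp(lagrange_eval(xs, ys, r), k)  # 2**k * p(r)

# Worst observed relative error over a sample grid; even this low-degree
# interpolant is accurate to roughly 7 decimal digits, within a fraction
# of a bit of what the best polynomial of the same degree achieves.
err = max(abs(my_exp(t) - math.exp(t)) / math.exp(t)
          for t in (i / 100 for i in range(-500, 501)))
```

A production libm would evaluate a minimax or Chebyshev polynomial of carefully chosen degree in Horner form with machine-generated coefficients; the point of the sketch is only that the tiny reduction interval keeps the required degree, and hence the cost, small.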