Schur aggregation for linear systems and determinants
Theoretical Computer Science
Proceedings of the 2009 conference on Symbolic numeric computation
Proceedings of the 24th ACM International Conference on Supercomputing
Algorithm 908: Online Exact Summation of Floating-Point Streams
ACM Transactions on Mathematical Software (TOMS)
Accurate evaluation of a polynomial and its derivative in Bernstein form
Computers & Mathematics with Applications
Algorithm engineering: bridging the gap between algorithm theory and practice
Accurate Matrix Factorization: Inverse LU and Inverse QR Factorizations
SIAM Journal on Matrix Analysis and Applications
A robust algorithm for geometric predicate by error-free determinant transformation
Information and Computation
Accurate evaluation algorithm for bivariate polynomial in Bernstein-Bézier form
Applied Numerical Mathematics
Accurate evaluation of the k-th derivative of a polynomial and its application
Journal of Computational and Applied Mathematics
In Part II of this paper we first refine the analysis of the error-free vector transformations presented in Part I. Based on that analysis, we present an algorithm for computing the rounded-to-nearest result of $s := \sum p_i$ for a given vector of floating-point numbers $p_i$, as well as algorithms for directed rounding. A special algorithm computes the sign of $s$ and works even for huge dimensions. Assume a floating-point working precision with relative rounding error unit $\mathtt{eps}$. We define and investigate a $K$-fold faithful rounding of a real number $r$. Basically, the result is stored in a vector $\mathtt{Res}_{\nu}$ of $K$ nonoverlapping floating-point numbers such that $\sum\mathtt{Res}_{\nu}$ approximates $r$ with relative accuracy $\mathtt{eps}^K$, and replacing $\mathtt{Res}_K$ by its two floating-point neighbors in $\sum\mathtt{Res}_{\nu}$ yields a lower and an upper bound for $r$. For a given vector of floating-point numbers with exact sum $s$, we present an algorithm for computing a $K$-fold faithful rounding of $s$ using only the working precision. Furthermore, we present an algorithm for computing a faithfully rounded result of the sum of a vector of huge dimension. Our algorithms are fast in terms of measured computing time because they allow good instruction-level parallelism: they require neither special operations such as access to the mantissa or exponent nor any extra precision, and they contain no branch in the inner loop. The only operations used are standard floating-point addition, subtraction, and multiplication in one working precision, for example, double precision. Certain constants used in the algorithms are proved to be optimal.
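To illustrate the kind of building block the abstract refers to, here is a minimal sketch of the classical error-free transformation TwoSum and a compensated summation built on it. This is not the paper's algorithm — the actual methods (sign computation, K-fold faithful rounding, huge-dimension summation) are branch-free, extraction-based vector transformations — but it shows the underlying principle that the rounding error of a floating-point addition is itself a floating-point number that can be recovered exactly; the function names `two_sum` and `sum2` are conventional, not taken from the source.

```python
def two_sum(a, b):
    """Error-free transformation (Knuth): returns (s, e) with
    s = fl(a + b) and a + b = s + e exactly in IEEE 754 arithmetic."""
    s = a + b
    z = s - a                    # the part of b that made it into s
    e = (a - (s - z)) + (b - z)  # exact rounding error of the addition
    return s, e

def sum2(p):
    """Compensated summation: the floating-point sum plus the exact sum
    of the per-step rounding errors, giving a result of roughly twice
    the working precision for sums of moderate condition number."""
    s = 0.0
    sigma = 0.0  # accumulated rounding errors
    for x in p:
        s, e = two_sum(s, x)
        sigma += e
    return s + sigma
```

For example, `sum2([1e16, 1.0, -1e16])` recovers `1.0`, whereas the plain left-to-right sum loses the `1.0` entirely because it is below the rounding error of `1e16 + 1.0`.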