Revised report on the algorithmic language scheme. ACM SIGPLAN Notices.
How to print floating-point numbers accurately. PLDI '90: Proceedings of the ACM SIGPLAN 1990 Conference on Programming Language Design and Implementation.
The art of computer programming, volume 2 (3rd ed.): seminumerical algorithms.
27 bits are not enough for 8-digit accuracy. Communications of the ACM.
Pragmatic parsing in Common Lisp; or, putting defmacro on steroids. ACM SIGPLAN Lisp Pointers.
The design of floating-point data types. ACM Letters on Programming Languages and Systems (LOPLAS).
HOPL-II: The Second ACM SIGPLAN Conference on History of Programming Languages.
Compiler transformations for high-performance computing. ACM Computing Surveys (CSUR).
Printing floating-point numbers quickly and accurately. PLDI '96: Proceedings of the ACM SIGPLAN 1996 Conference on Programming Language Design and Implementation.
Revised report on the algorithmic language scheme. ACM SIGPLAN Lisp Pointers.
Revised Report on the Algorithmic Language Scheme. Higher-Order and Symbolic Computation.
How to read floating point numbers accurately. ACM SIGPLAN Notices: Best of PLDI 1979-1999.
How to print floating-point numbers accurately. ACM SIGPLAN Notices: Best of PLDI 1979-1999.
Systematic IEEE rounding method for high-speed floating-point multipliers. IEEE Transactions on Very Large Scale Integration (VLSI) Systems.
History of Programming Languages II.
Formal Verification of the VAMP Floating Point Unit. Formal Methods in System Design.
Fast decimal floating-point division. IEEE Transactions on Very Large Scale Integration (VLSI) Systems.
MPFR: A multiple-precision binary floating-point library with correct rounding. ACM Transactions on Mathematical Software (TOMS).
The pitfalls of verifying floating-point computations. ACM Transactions on Programming Languages and Systems (TOPLAS).
Revised6 report on the algorithmic language scheme. Journal of Functional Programming.
IBM Journal of Research and Development.
Consider the problem of converting decimal scientific notation for a number into the best binary floating-point approximation to that number, for some fixed precision. This problem cannot be solved using arithmetic of any fixed precision. Hence the IEEE Standard for Binary Floating-Point Arithmetic does not require the result of such a conversion to be the best approximation.

This paper presents an efficient algorithm that always finds the best approximation. The algorithm uses a few extra bits of precision to compute an IEEE-conforming approximation while testing an intermediate result to determine whether the approximation could be other than the best. If the approximation might not be the best, then the best approximation is determined by a few simple operations on multiple-precision integers, where the precision is determined by the input. When using 64 bits of precision to compute IEEE double-precision results, the algorithm avoids higher-precision arithmetic over 99% of the time.

The input problem considered by this paper is the inverse of an output problem considered by Steele and White: given a binary floating-point number, print a correctly rounded decimal representation of it using the smallest number of digits that will allow the number to be read back without loss of accuracy. The Steele and White algorithm assumes that the input problem is solved; an imperfect solution to the input problem, as allowed by the IEEE standard and ubiquitous in current practice, defeats the purpose of their algorithm.
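The slow path the abstract describes, in which exact multiple-precision arithmetic settles cases the fast fixed-precision attempt cannot prove correct, can be illustrated with a minimal sketch. This is not the paper's algorithm: the function name `best_double` and the inputs `d`, `e` (the significand and decimal exponent of `d * 10**e`) are illustrative choices, and exact rational arithmetic is used throughout rather than only as a fallback.

```python
from fractions import Fraction
import math

def best_double(d: int, e: int) -> float:
    """Correctly rounded IEEE double for the value d * 10**e.

    Sketch of the exact path only: candidate doubles are compared
    against the exact rational value. Clinger's algorithm needs
    multiple-precision work like this in under 1% of cases; the
    rest are settled with a few extra bits of fixed precision.
    Assumes the fast estimate stays finite, and ignores the
    halfway ties-to-even case for brevity.
    """
    target = Fraction(d) * Fraction(10) ** e   # exact rational value
    x = float(d) * 10.0 ** e                   # fast estimate; may be off by an ulp or two
    # Walk to the representable double nearest the exact value.
    while True:
        err = abs(Fraction(x) - target)        # Fraction(x) is x's exact value
        up = math.nextafter(x, math.inf)
        down = math.nextafter(x, -math.inf)
        if abs(Fraction(up) - target) < err:
            x = up
        elif abs(Fraction(down) - target) < err:
            x = down
        else:
            return x
```

For comparison, CPython's own `float()` parser is correctly rounded, so the sketch should agree with it, e.g. `best_double(7, 25) == float("7e25")`; and `repr()` addresses the Steele and White output direction, producing the shortest decimal string that reads back to the same double, so `float(repr(x)) == x`.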