For scientific computational software, accuracy is a constant concern. While existing tools and techniques can estimate the accuracy of a program's output, they do not attempt to locate where the errors originate or which parts of the code are most responsible for amplifying them. In the related problem of software performance optimization, the Pareto principle, also known as the 80/20 rule, is used to great effect. Because the performance of software typically depends on only a few critical sections of code, optimization effort can be focused on locating those sections with a profiler and then improving only the functions that will have the greatest effect on overall performance. Does the Pareto principle also apply to software accuracy? To study this question, we develop a novel approach for determining accuracy degradation at the function level using a combination of interval analysis and derivative techniques. We use this approach to analyze a piece of scientific computational software from the field of nuclear engineering. Our results suggest that the Pareto principle does in fact apply to accuracy degradation: 88% of the analyzed functions had less than 2% average relative error in their output, and error amplification occurred in only 19% of the functions. These results imply that tools which locate the critical sections of code where accuracy degradation is high could help scientific developers understand and improve the accuracy characteristics of their software.
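To make the idea of function-level error amplification concrete, here is a minimal Python sketch combining the two ingredients the abstract names: an interval bound on a function's input range and a derivative-based relative condition number that measures how much a relative input error is amplified in the output. The names `Interval`, `amplification`, and `max_amplification` are illustrative assumptions, not the paper's actual model or tooling.

```python
import math


class Interval:
    """Closed interval [lo, hi] used to bound a function's input range."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def width(self):
        return self.hi - self.lo


def amplification(f, dfdx, x):
    """Relative condition number |x * f'(x) / f(x)|: the factor by which a
    relative error in x is amplified in the output of f at the point x."""
    fx = f(x)
    if fx == 0.0:
        return math.inf
    return abs(x * dfdx(x) / fx)


def max_amplification(f, dfdx, box, samples=100):
    """Scan the input interval and report the worst-case amplification,
    a crude stand-in for a per-function accuracy-degradation score."""
    step = box.width() / (samples - 1)
    return max(amplification(f, dfdx, box.lo + i * step) for i in range(samples))


if __name__ == "__main__":
    # Example: log(x) is ill-conditioned near x = 1, where its relative
    # condition number 1/|log(x)| blows up, so the score is large.
    print(max_amplification(math.log, lambda x: 1.0 / x, Interval(1.001, 2.0)))
```

Under this kind of scoring, a profiler-style report could rank functions by their worst-case (or average) amplification over the input intervals observed at run time, focusing attention on the small fraction of functions where accuracy actually degrades.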