Towards the profiling of scientific software for accuracy

  • Authors:
  • Nicholas Jie Meng; Diane Kelly; Thomas R. Dean

  • Affiliations:
  • Queen's University, Canada; Royal Military College, Canada; Queen's University, Canada

  • Venue:
  • Proceedings of the 2011 Conference of the Center for Advanced Studies on Collaborative Research

  • Year:
  • 2011

Abstract

For scientific computational software, accuracy is a constant concern. While existing tools and techniques can estimate the accuracy of a program's output, they do not attempt to locate where errors originate or which parts of the code are most responsible for amplifying them. In the related problem of software performance optimization, the Pareto principle, also known as the 80/20 rule, is used to great effect. Because the performance of software typically depends on only a few critical sections of code, optimization effort can be focused on locating these sections with the help of a profiler and then optimizing only the functions that will have the greatest effect on overall performance. Does the Pareto principle also apply to software accuracy? To study this question, we develop a novel approach for determining accuracy degradation at the function level using a combination of interval analysis and derivative techniques. We apply this model to analyze a piece of scientific computational software from the field of nuclear engineering. Our results suggest that the Pareto principle does in fact apply to accuracy degradation: 88% of the analyzed functions had less than 2% average relative error in their output, and error amplification occurred in only 19% of the functions. These results imply that tools focused on locating the critical sections of code where accuracy degradation is high could help scientific developers understand and improve the accuracy characteristics of their software.
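The paper's function-level model is not reproduced here, but the sketch below illustrates the general flavor of an interval-analysis approach to tracking accuracy degradation: inputs carry an uncertainty interval, the interval is propagated through a function using interval arithmetic, and the relative width of the output interval serves as a rough per-function indicator of accuracy loss. The `Interval` class, the `model_function` example, and the chosen input bounds are illustrative assumptions, not the authors' model.

```python
# Hedged sketch (not the paper's actual model): propagate input uncertainty
# through a function with simple interval arithmetic and report the relative
# width of the output interval as a crude "accuracy degradation" measure.

class Interval:
    """A closed interval [lo, hi] with the arithmetic needed for this example."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sum of intervals: add the bounds component-wise.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product of intervals: take the min/max over all bound combinations.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def relative_width(self):
        # Width of the interval relative to its midpoint (undefined at zero).
        mid = (self.lo + self.hi) / 2.0
        return abs(self.hi - self.lo) / abs(mid) if mid != 0 else float("inf")


def model_function(x, y):
    # Stand-in for one function of the analyzed code: f(x, y) = x * y + x.
    return x * y + x


# Inputs known only to within a small uncertainty.
x = Interval(0.999, 1.001)
y = Interval(1.999, 2.001)

out = model_function(x, y)
print("output interval: [%.6f, %.6f]" % (out.lo, out.hi))
print("relative output width: %.4f%%" % (100.0 * out.relative_width()))
```

In the spirit of the abstract's profiler analogy, a tool along these lines would compute such per-function error figures across an entire code base and rank functions by their contribution to accuracy degradation, much as a performance profiler ranks hot spots by execution time.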