Evaluating numerical ODE/DAE methods, algorithms and software

  • Authors:
  • Gustaf Söderlind; Lina Wang

  • Affiliation (both authors):
  • Numerical Analysis, Centre for Mathematical Sciences, Lund University, Box 118, SE-221 00 Lund, Sweden

  • Venue:
  • Journal of Computational and Applied Mathematics - Special issue: International workshop on the technological aspects of mathematics
  • Year:
  • 2006


Abstract

Until recently, the testing of ODE/DAE software has been limited to simple comparisons and benchmarking. The process of developing software from a mathematically specified method is complex: it entails constructing control structures and objectives, selecting iterative methods and termination criteria, choosing norms, and many more decisions. Most software constructors have taken a heuristic approach to these design choices, and as a consequence two different implementations of the same method may show significant differences in performance. Yet it is common to try to deduce from software comparisons that one method is better than another. Such conclusions are not warranted, however, unless the testing is carried out under true ceteris paribus conditions. Moreover, testing is an empirical science and as such requires a formal test protocol; without one, conclusions are questionable, invalid, or even false. We argue that ODE/DAE software can be constructed and analyzed by proven, "standard" scientific techniques instead of heuristics. The goals are computational stability, reproducibility, and improved software quality. We also focus on different error criteria and norms, and discuss modifications to DASPK and RADAU5. Finally, some basic principles of a test protocol are outlined and applied to testing these codes on a variety of problems.
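To make the abstract's point about norm and tolerance choices concrete, the sketch below shows the weighted RMS error norm conventionally used for local error control in codes such as DASPK and RADAU5 (the function name and the sample values are illustrative, not taken from the paper). Two implementations of the same method that weight errors differently — e.g. RMS vs. max norm, or different `atol`/`rtol` handling — will accept different step sizes and thus perform differently, which is exactly why ceteris paribus testing matters.

```python
import numpy as np

def weighted_rms_norm(err, y, atol=1e-8, rtol=1e-6):
    """Weighted RMS norm of a local error estimate.

    Each component is scaled by atol + rtol*|y_i|, so the step-acceptance
    test becomes simply: accept if the norm is <= 1.
    """
    w = atol + rtol * np.abs(y)          # per-component error weights
    return np.sqrt(np.mean((err / w) ** 2))

# Illustrative values: components of very different magnitude are
# treated comparably because of the mixed absolute/relative weighting.
y = np.array([1.0, 1e-3, 100.0])         # current solution
err = np.array([1e-7, 1e-9, 1e-5])       # local error estimate
print(weighted_rms_norm(err, y))         # <= 1 means the step is accepted
```

Replacing `np.mean(...)` with `np.max(...)` gives a max-norm criterion instead; the method is mathematically unchanged, but the control decisions, and hence the benchmark results, are not.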