Stopping Rules for the Operational Testing of Safety-Critical Software

  • Authors:
  • Bev Littlewood, David Wright


  • Venue:
  • FTCS '95 Proceedings of the Twenty-Fifth International Symposium on Fault-Tolerant Computing
  • Year:
  • 1995

Abstract

It has been proposed to test a software safety system for a nuclear reactor by subjecting it to demands that are statistically representative of those it will meet in operational use. The intention behind the test is to acquire high confidence (99%) that the probability of failure on demand is smaller than 10^-3. To this end the test takes the form of executing about 5000 demands and requiring that all of them succeed. In practice it is necessary to consider what happens if the software fails the test and is repaired. We argue that the earlier failure information needs to be taken into account in devising the form of the test that the modified software must pass: essentially, after such a failure the testing requirement may need to become more stringent (i.e. the number of tests that must be executed failure-free should increase). We examine a Bayesian approach to the problem, both for a stopping rule based upon a required bound on the probability of failure on demand, as above, and for a requirement based upon a prediction of future failure behaviour. We show that the first approach appears to be less conservative than the second, and argue that the second should be preferred in practical application.
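The "about 5000 demands" figure can be motivated by a standard classical calculation (a sketch, not the Bayesian analysis the paper itself develops): if the true probability of failure on demand were as large as the bound p, the chance of n consecutive failure-free demands would be (1-p)^n, and we choose n so that this is at most 1 minus the required confidence.

```python
import math

def demands_required(pfd_bound: float, confidence: float) -> int:
    """Smallest n such that (1 - pfd_bound)**n <= 1 - confidence.

    Observing n failure-free demands then rules out a pfd as large as
    pfd_bound at the given (single-sided, classical) confidence level.
    """
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - pfd_bound))

# 99% confidence that pfd < 10^-3:
n = demands_required(pfd_bound=1e-3, confidence=0.99)
print(n)  # 4603, consistent with the "about 5000 demands" in the abstract
```

This classical bound is only the starting point; the paper's concern is how the required n should change after a failure and repair, which the simple formula above does not address.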