Developing artificial neural networks for safety critical systems

  • Authors:
  • Zeshan Kurd, Tim Kelly, Jim Austin

  • Affiliations:
  • Department of Computer Science, University of York, YO10 5DD, York, UK (all authors)

  • Venue:
  • Neural Computing and Applications

  • Year:
  • 2006

Abstract

There are many performance-based techniques that aim to improve the safety of neural networks for safety-critical applications. However, many of these approaches fail to provide the forms of safety assurance required for certification. As a result, neural networks are typically restricted to advisory roles in safety-related applications. Because neural networks have the ability to operate in unpredictable and changing environments, it is desirable to certify them for highly dependable roles in safety-critical systems. This paper outlines safety criteria: safety requirements on the behaviour of neural networks which, if enforced, can contribute to justifying the safety of the functional properties of an artificial neural network (ANN). Characteristics of candidate neural network models are also outlined, based upon representing knowledge in interpretable and understandable forms. The paper also presents a safety lifecycle for artificial neural networks, which focuses on managing the behaviour represented by neural networks and contributes to providing acceptable forms of safety assurance.