Many performance-based techniques aim to improve the safety of neural networks for safety-critical applications. However, most of these approaches fall short of the safety assurance required for certification, and neural networks are therefore typically restricted to advisory roles in safety-related applications. Because neural networks can operate in unpredictable and changing environments, it is desirable to certify them for highly dependable roles in safety-critical systems. This paper outlines safety criteria: safety requirements on the behaviour of neural networks which, if enforced, can contribute to justifying the safety of ANN functional properties. Characteristics of suitable neural network models are also outlined, based upon representing knowledge in interpretable and understandable forms. The paper then presents a safety lifecycle for artificial neural networks; this lifecycle focuses on managing the behaviour represented by neural networks and contributes to providing acceptable forms of safety assurance.
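As an illustrative sketch only (not taken from the paper), one simple behavioural safety criterion is an output envelope enforced by a runtime monitor between the network and the actuator: outputs inside the envelope pass through, and anything else is replaced by a known-safe fallback. The bounds, fallback value, and function names below are all assumed for the example.

```python
# Hypothetical runtime guard for a neural network output.
# The safe envelope [-1.0, 1.0] and the fallback value 0.0 are
# illustrative assumptions, not values prescribed by the paper.

def safe_output(raw_output: float,
                lower: float = -1.0,
                upper: float = 1.0,
                fallback: float = 0.0) -> float:
    """Pass through outputs inside the safe envelope; otherwise
    substitute a known-safe fallback before the value reaches
    the controlled system."""
    if lower <= raw_output <= upper:
        return raw_output
    return fallback

# An in-range output is passed through unchanged.
print(safe_output(0.5))   # in-range value
# An out-of-range output is replaced by the fallback.
print(safe_output(2.3))   # out-of-range value
```

A guard of this kind does not by itself certify the network, but it gives an explicit, analysable safety requirement of the sort the criteria in the paper are intended to capture.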