A Grid Based Diagnostics and Prognosis System for Rolls Royce Aero Engines: The DAME Project
CLADE '04 Proceedings of the 2nd International Workshop on Challenges of Large Applications in Distributed Environments
NCA '04 Proceedings of the Network Computing and Applications, Third IEEE International Symposium
Designing safe, profitable automated stock trading agents using evolutionary algorithms
Proceedings of the 8th annual conference on Genetic and evolutionary computation
Self-Managed Systems: an Architectural Challenge
FOSE '07 2007 Future of Software Engineering
A multi-agent simulation system for prediction and scheduling of aero engine overhaul
Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems: industrial track
Safety Assurance Strategies for Autonomous Vehicles
SAFECOMP '08 Proceedings of the 27th international conference on Computer Safety, Reliability, and Security
Using quantitative analysis to implement autonomic IT systems
ICSE '09 Proceedings of the 31st International Conference on Software Engineering
Using safety critical artificial neural networks in gas turbine aero-engine control
SAFECOMP'05 Proceedings of the 24th international conference on Computer Safety, Reliability, and Security
Justification of smart sensors for nuclear applications
SAFECOMP'05 Proceedings of the 24th international conference on Computer Safety, Reliability, and Security
The behaviour of control functions in safety-critical software systems is typically bounded to prevent the occurrence of known system-level hazards. These bounds are derived through safety analyses and implemented through appropriate design features. However, the unpredictability of real-world problems can result in changes in the operating context that invalidate the behavioural bounds themselves, for example, unexpectedly hazardous operating contexts arising from failures or degradation. For highly complex problems, it may be infeasible to determine, prior to deployment, the precise behavioural bounds of a function that addresses or minimises the risk of hazardous operation. This paper presents an overview of the safety challenges associated with such problems and how they might be addressed. A self-management framework is proposed that performs on-line risk management. The features of the framework are shown in the context of employing intelligent adaptive controllers within complex and highly dynamic problem domains such as gas-turbine aero-engine control. The safety assurance arguments enabled by the framework, which are necessary for certification, are also outlined.
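The core idea of bounding an adaptive controller's behaviour, with an on-line risk monitor able to tighten those bounds as the operating context degrades, can be illustrated with a minimal sketch. All names here (`SafetyBoundedController`, `tighten`, the proportional control law) are hypothetical illustrations, not the paper's actual framework:

```python
# Sketch: runtime enforcement of behavioural bounds around an adaptive
# controller, with an on-line mechanism to shrink the permitted envelope
# when degradation is detected. Hypothetical names throughout.

class SafetyBoundedController:
    """Wraps a (possibly adaptive) controller and clamps its output to
    bounds that an on-line risk monitor may tighten at runtime."""

    def __init__(self, controller, lower, upper):
        self.controller = controller  # callable: (setpoint, measurement) -> command
        self.lower = lower
        self.upper = upper

    def tighten(self, margin):
        """On-line risk management step: shrink the permitted envelope,
        e.g. after a monitor flags sensor faults or component wear."""
        self.lower += margin
        self.upper -= margin

    def command(self, setpoint, measurement):
        raw = self.controller(setpoint, measurement)
        # Enforce the behavioural bound regardless of what the
        # adaptive element proposes.
        return max(self.lower, min(self.upper, raw))


# Example: a naive proportional law standing in for fuel-flow demand.
fuel = SafetyBoundedController(lambda sp, pv: 2.0 * (sp - pv),
                               lower=0.0, upper=100.0)
print(fuel.command(80.0, 20.0))  # raw demand 120.0, clamped to 100.0
fuel.tighten(10.0)               # monitor reports degraded context
print(fuel.command(80.0, 20.0))  # envelope now [10, 90], clamped to 90.0
```

The key design point mirrored from the abstract is the separation of concerns: the adaptive element may propose any command, but a simple, analysable wrapper guarantees the system-level behavioural bound, and the bound itself can be adjusted on-line rather than fixed entirely before deployment.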