Autonomous recovery from hostile code insertion using distributed reflection

  • Authors:
  • Catriona M. Kennedy; Aaron Sloman

  • Affiliations:
  • School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK (both authors)

  • Venue:
  • Cognitive Systems Research
  • Year:
  • 2003

Abstract

In a hostile environment, an autonomous cognitive system requires a reflective capability to detect problems in its own operation and recover from them without external intervention. We present an architecture in which reflection is distributed so that components mutually observe and protect each other, and in which the system has a distributed model of all its components, including those concerned with the reflection itself. Some reflective (or 'meta-level') components enable the system to monitor its execution traces and detect anomalies by comparing them with a model of normal activity. Other components monitor the 'quality' of performance in the application domain. An implementation in a simple virtual world shows that the system can recover from certain kinds of hostile code attacks that cause it to make wrong decisions in its application domain, even if some of its self-monitoring components are also disabled.
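The core mechanism described in the abstract, components that mutually observe one another's execution traces and compare them against a model of normal activity, can be sketched roughly as below. This is an illustrative reconstruction only, not the authors' implementation (which ran in a simple virtual world); the class names, the trace representation, and the `expected_events` model are assumptions made for the sake of the example.

```python
# Sketch of distributed reflection: each component carries a meta-level that
# watches ANOTHER component's execution trace and compares it against a model
# of normal activity. Names and data structures here are illustrative
# assumptions, not the paper's implementation.

from dataclasses import dataclass, field


@dataclass
class Component:
    name: str
    expected_events: set                      # model of normal activity
    trace: list = field(default_factory=list)  # recorded execution trace
    disabled: bool = False                     # simulates a hostile attack

    def run_cycle(self):
        """Object-level work: record what the component actually did."""
        if self.disabled:
            return
        for event in self.expected_events:
            self.trace.append(event)

    def observe(self, other: "Component") -> bool:
        """Meta-level: compare another component's recent trace with its
        model of normal activity; trigger repair if they diverge."""
        if self.disabled:
            return False
        recent = set(other.trace[-len(other.expected_events):])
        anomaly = recent != other.expected_events
        if anomaly:
            self.repair(other)
        return anomaly

    def repair(self, other: "Component"):
        """Recovery without external intervention: re-enable the damaged
        component and discard its corrupted trace (simplified here)."""
        other.disabled = False
        other.trace.clear()


def reflective_ring(components):
    """Each component monitors the next, so the monitors themselves are
    monitored: there is no unprotected 'top level'."""
    for i, c in enumerate(components):
        yield c, components[(i + 1) % len(components)]


if __name__ == "__main__":
    planner = Component("planner", {"sense", "plan", "act"})
    mon_a = Component("monitor_a", {"read_trace", "compare", "report"})
    mon_b = Component("monitor_b", {"read_trace", "compare", "report"})
    network = [planner, mon_a, mon_b]

    # Normal operation: every component runs and records its trace.
    for comp in network:
        comp.run_cycle()

    # Simulated hostile code insertion: the planner stops acting correctly
    # and one of the self-monitoring components is silenced as well.
    planner.trace.remove("act")
    mon_a.disabled = True
    mon_a.trace.clear()

    for watcher, target in reflective_ring(network):
        if watcher.observe(target):
            print(f"{watcher.name} flagged {target.name} and triggered repair")
```

In this toy run the ring closure is what matters: the planner's monitor is itself attacked, yet its silence shows up as an anomalous (empty) trace to the component watching it, so both the object-level fault and the damaged meta-level component are detected and repaired, mirroring the recovery property claimed in the abstract.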