SOAR: an architecture for general intelligence
Artificial Intelligence
HTN planning: complexity and expressivity
AAAI'94: Proceedings of the Twelfth National Conference on Artificial Intelligence (vol. 2)
The first law of robotics (a call to arms)
AAAI'94: Proceedings of the Twelfth National Conference on Artificial Intelligence (vol. 2)
The GOMS family of user interface analysis techniques: comparison and contrast
ACM Transactions on Computer-Human Interaction (TOCHI)
Planning and Resource Allocation for Hard Real-time, Fault-Tolerant Plan Execution
Autonomous Agents and Multi-Agent Systems
The Vision of Autonomic Computing
Computer
Intrusion detection using sequences of system calls
Journal of Computer Security
Behavior bounding: toward effective comparisons of agents & humans
IJCAI'03: Proceedings of the 18th International Joint Conference on Artificial Intelligence
Detecting and reacting to unplanned-for world states
AAAI'97/IAAI'97: Proceedings of the Fourteenth National Conference on Artificial Intelligence and Ninth Conference on Innovative Applications of Artificial Intelligence
Behavior bounding: an efficient method for high-level behavior comparison
Journal of Artificial Intelligence Research
Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Developing and testing intelligent agents is a complex task that is both time-consuming and costly. As a result, problems in an agent's behavior may be discovered only after the agent has been put to use. This leaves society with a vexing problem: although we can create agents that seem capable of performing useful tasks autonomously, we remain unwilling to trust these agents because testing is inherently incomplete. In this paper we present a framework that brings validation techniques out of the laboratory and uses them to monitor and constrain an agent's behavior concurrently with task execution. Applications of this framework extend well beyond ensuring safe agent behavior through run-time validation; they also include enforcing social or environmental policies and regulating the agent's autonomy.
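The idea of monitoring and constraining behavior concurrently with execution can be illustrated with a minimal sketch. The abstract does not specify the framework's interface, so the class, constraint predicates, and action/state dictionaries below are hypothetical: each proposed action is checked against registered constraints (e.g. a social or environmental policy) before it is allowed to run.

```python
# Hypothetical sketch of run-time behavior validation; not the paper's actual API.
# Constraints are named predicates over (action, state); an action executes only
# if every constraint holds, otherwise it is blocked and the violations reported.

class BehaviorMonitor:
    def __init__(self):
        self.constraints = []  # list of (name, predicate) pairs

    def add_constraint(self, name, predicate):
        """Register a policy: predicate(action, state) -> bool (True = allowed)."""
        self.constraints.append((name, predicate))

    def validate(self, action, state):
        """Return names of violated constraints; empty list means the action is allowed."""
        return [name for name, pred in self.constraints if not pred(action, state)]

    def execute(self, action, state, effect):
        """Run `effect` only if the action passes validation."""
        violations = self.validate(action, state)
        if violations:
            return ("blocked", violations)
        return ("executed", effect(action, state))


# Example policy: block any action whose cost exceeds the agent's budget.
monitor = BehaviorMonitor()
monitor.add_constraint("resource_budget",
                       lambda a, s: a.get("cost", 0) <= s.get("budget", 0))

result = monitor.execute({"name": "deliver", "cost": 5},
                         {"budget": 10},
                         lambda a, s: f"{a['name']} done")
# result == ("executed", "deliver done")
```

Tightening or loosening the constraint set at run time is one simple way such a framework could regulate an agent's autonomy.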