An explainable artificial intelligence system for small-unit tactical behavior
IAAI'04 Proceedings of the 16th Conference on Innovative Applications of Artificial Intelligence
As artificial intelligence (AI) systems and behavior models in military simulations grow more complex, users find it increasingly difficult to understand the activities of computer-controlled entities. Prototype explanation systems have been added to simulators, but their designers have not heeded the lessons learned from earlier work on explaining expert system behavior: these new explanation systems are neither modular nor portable, because each is tied to a particular AI system. In this paper, we present a modular and generic architecture for explaining the behavior of simulated entities. We describe its application to the Virtual Humans, a simulation designed to teach soft skills such as negotiation and cultural awareness.
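The abstract's central claim is that explanation should be decoupled from any particular AI system. A minimal sketch of what such a modular layer might look like is given below; the names (`Event`, `ExplanationModule`, `record`, `explain`) are hypothetical illustrations, not the paper's actual interface, and the example simply assumes that any simulator can log decision events in a common format that a generic explainer then paraphrases.

```python
# Hypothetical sketch: a generic explanation module that any AI system
# can feed, rather than explanation logic embedded in one simulator.
from dataclasses import dataclass


@dataclass
class Event:
    """A decision event logged by the underlying AI system."""
    actor: str
    action: str
    reason: str


class ExplanationModule:
    """Consumes events from any source and answers 'why' queries."""

    def __init__(self):
        self.log = []

    def record(self, event: Event) -> None:
        self.log.append(event)

    def explain(self, actor: str) -> list:
        # Paraphrase every logged decision made by the given actor.
        return [
            f"{e.actor} chose '{e.action}' because {e.reason}"
            for e in self.log
            if e.actor == actor
        ]


# Any simulator adapts its entities to the shared event format:
xai = ExplanationModule()
xai.record(Event("sergeant", "take cover", "the unit is under fire"))
print(xai.explain("sergeant"))
```

Because the explainer depends only on the shared event format, swapping the underlying behavior model (SOAR rules, HTN plans, scripted agents) requires no change to the explanation layer, which is the portability property the paper argues earlier expert-system work already demonstrated.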