Explanation is an important capability for usable intelligent systems, including intelligent agents and cognitive models embedded within simulations and other decision support systems. Explanation facilities help users understand how and why an intelligent system possesses a given structure and set of behaviors. Prior research has produced a number of approaches for providing explanation capabilities and has identified some significant challenges. We describe designs that can be reused to create intelligent agents capable of explaining themselves. The designs include ways to provide ontological, mechanistic, and operational explanations. These designs encode lessons learned from prior research and provide guidance for incorporating explanation facilities into intelligent systems. The designs are derived both from prior research on explanation tool design and from the empirical study reported here on the questions users ask when working with an intelligent system. We demonstrate the use of these designs through examples implemented in the Herbal high-level cognitive modeling language. These designs can help build better agents: they support creating more usable and more affordable intelligent agents by encapsulating prior knowledge about how to generate explanations in concise representations that agent developers can instantiate or adapt.
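To make the three kinds of explanation concrete, the reusable design might be sketched as an abstract interface that an agent implements. This is a minimal illustrative sketch, not the paper's actual API: all class names, method names, and the example strings below are assumptions introduced here for illustration.

```python
from abc import ABC, abstractmethod


class ExplainableAgent(ABC):
    """Hypothetical sketch of a reusable explanation design: an agent
    exposes the three kinds of explanation named in the abstract."""

    @abstractmethod
    def explain_ontology(self, concept: str) -> str:
        """Explain what a structure *is* (ontological explanation)."""

    @abstractmethod
    def explain_mechanism(self, behavior: str) -> str:
        """Explain how a behavior is produced (mechanistic explanation)."""

    @abstractmethod
    def explain_operation(self, step: str) -> str:
        """Explain why the agent acted as it did at runtime
        (operational explanation)."""


class DemoAgent(ExplainableAgent):
    # Canned answers stand in for explanations that a real agent would
    # generate from its knowledge base and execution trace.
    def explain_ontology(self, concept: str) -> str:
        return f"'{concept}' is a production rule in the agent's knowledge base."

    def explain_mechanism(self, behavior: str) -> str:
        return f"'{behavior}' occurs when the rule's conditions match working memory."

    def explain_operation(self, step: str) -> str:
        return f"At step {step}, the rule fired because it matched the current goal."


if __name__ == "__main__":
    agent = DemoAgent()
    print(agent.explain_ontology("move-to-target"))
    print(agent.explain_operation("42"))
```

Separating the three explanation types into distinct methods mirrors the idea of encapsulating explanation knowledge in a concise, reusable representation: a developer instantiates or adapts each method for a new agent rather than building an explanation facility from scratch.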