Enhanced Maintenance and Explanation of Expert Systems Through Explicit Models of Their Development
IEEE Transactions on Software Engineering - Special issue on artificial intelligence and software engineering
Reflections on NoteCards: seven issues for the next generation of hypermedia systems
Communications of the ACM
Responding to "HUH?": answering vaguely articulated follow-up questions
CHI '89 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Shadow: Fusing Hypertext with AI
IEEE Expert: Intelligent Systems and Their Applications
A reactive approach to explanation in expert and advice-giving systems
Planning text for advisory dialogues
ACL '89 Proceedings of the 27th annual meeting on Association for Computational Linguistics
A reactive approach to explanation
IJCAI'89 Proceedings of the 11th international joint conference on Artificial intelligence - Volume 2
An improved interface for tutorial dialogues: browsing a visual dialogue history
CHI '94 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Segmented interaction history in a collaborative interface agent
Proceedings of the 2nd international conference on Intelligent user interfaces
Explanations in Knowledge Systems: the Role of Explicit Representation of Design Knowledge
IEEE Expert: Intelligent Systems and Their Applications
Explanation requires a dialogue: users must be allowed to ask questions about previously given explanations. However, building an interface that supports follow-up questions poses a difficult challenge for natural language understanding, because such questions often intermix meta-level references to the discourse with object-level references to the domain. We propose a hypertext-like interface that allows users to point to the portion of the system's explanation they would like clarified. By allowing users to point, many of the difficult referential problems in natural language analysis can be avoided. However, the feasibility of such an interface rests on the system's ability to understand what the user is pointing at; that is, the system must understand its own explanations. To solve this problem, we employ a planning approach to explanation generation that records the design process that produced an explanation, so that this record can be used in later reasoning. In this paper, we show how synergy arises from combining a "pointing-style" interface with a text-planning-based generation system, making explanation dialogues more feasible.
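The abstract's central mechanism can be illustrated with a minimal sketch (all names here are hypothetical, not from the paper): if each node of a retained text plan records the communicative goal behind a span of the generated explanation, a user's "point" (a character offset into the text) can be resolved to the most specific goal that produced it, which is what a follow-up question is really about.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch: a plan node records the communicative goal that
# produced a contiguous span [start, end) of the generated explanation text.
# Retaining this tree after generation lets the system map a pointing
# gesture back to its own design rationale.

@dataclass
class PlanNode:
    goal: str                 # e.g. "justify-conclusion" (illustrative label)
    start: int                # first character of the covered text span
    end: int                  # one past the last character of the span
    children: List["PlanNode"] = field(default_factory=list)

def resolve_pointing(node: PlanNode, offset: int) -> Optional[PlanNode]:
    """Return the most specific plan node whose text span covers offset."""
    if not (node.start <= offset < node.end):
        return None
    for child in node.children:
        hit = resolve_pointing(child, offset)
        if hit is not None:
            return hit
    return node

# Example: a two-part explanation whose plan is kept after generation.
plan = PlanNode("explain-decision", 0, 80, [
    PlanNode("state-conclusion", 0, 30),
    PlanNode("justify-conclusion", 30, 80, [
        PlanNode("cite-rule", 30, 55),
        PlanNode("cite-evidence", 55, 80),
    ]),
])

print(resolve_pointing(plan, 60).goal)  # prints "cite-evidence"
```

Because the lookup descends to the deepest covering node, a click on a single clause resolves to the narrow goal (e.g. the evidence cited) rather than the whole explanation, sidestepping the referential ambiguity a free-text follow-up question would raise.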