Enhanced Maintenance and Explanation of Expert Systems Through Explicit Models of Their Development
IEEE Transactions on Software Engineering - Special issue on artificial intelligence and software engineering
Expert and advice-giving systems produce complex, multi-sentential responses to users' queries. Analyses of novice/expert dialogues indicate that novices often do not understand an expert's response and rarely ask well-formulated follow-up questions. Systems must therefore be able to provide further information in response to vaguely articulated questions. However, current systems can neither clarify misunderstood explanations nor elaborate on previous ones. In this paper we describe an approach to explanation generation that expands a system's explanatory capabilities, enabling it to produce clarifying or elaborating explanations in response to follow-up questions or to an indication that an explanation was not understood.