Many abductive understanding systems explain novel situations through a chaining process that is neutral to the explainer's needs beyond generating some plausible explanation for the event in question. This paper examines the relationship between standard models of abductive understanding and the case-based explanation model. In case-based explanation, the construction and selection of abductive hypotheses are focused by specific explanations of prior episodes and by goal-based criteria reflecting current information needs. The case-based method is inspired by observations of how people explain anomalous events during everyday understanding, and this paper focuses on the method's contributions to the problem of building good explanations in everyday domains. We identify five central issues, compare how traditional and case-based explanation models address them, and discuss motivations for using the case-based approach to generate plausible and useful explanations in domains that are complex and imperfectly understood.
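To make the contrast concrete, the following is a minimal, hypothetical sketch of the case-based explanation loop described above: explanations of prior episodes are retrieved by similarity to a new anomaly, and candidates are ranked by goal-based criteria rather than accepted as soon as any plausible chain is found. All names (ExplanationPattern, retrieve, evaluate, explain) and the scoring scheme are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass


@dataclass
class ExplanationPattern:
    """A stored explanation of a prior episode (illustrative structure)."""
    anomaly: set        # features of the event this explanation covered
    hypothesis: str     # the abductive hypothesis it proposed
    goals_served: set   # information needs the hypothesis addressed


def retrieve(library, anomaly_features):
    """Rank stored explanations by feature overlap with the new anomaly."""
    return sorted(library,
                  key=lambda xp: len(xp.anomaly & anomaly_features),
                  reverse=True)


def evaluate(xp, anomaly_features, goals):
    """Goal-based criteria: reward both fit to the anomaly and
    usefulness for the explainer's current information needs."""
    fit = len(xp.anomaly & anomaly_features) / max(len(anomaly_features), 1)
    usefulness = len(xp.goals_served & goals) / max(len(goals), 1)
    return fit + usefulness


def explain(library, anomaly_features, goals):
    """Return the adapted prior hypothesis that best serves current goals,
    rather than the first plausible explanation found by chaining."""
    candidates = retrieve(library, anomaly_features)[:3]  # similarity shortlist
    if not candidates:
        return None  # would fall back to neutral chaining here
    best = max(candidates,
               key=lambda xp: evaluate(xp, anomaly_features, goals))
    return best.hypothesis


# Tiny case library of prior explanations (illustrative data only).
library = [
    ExplanationPattern({"car", "wont-start", "cold"},
                       "the battery drained overnight", {"prevention"}),
    ExplanationPattern({"car", "wont-start", "clicking"},
                       "the starter motor failed", {"repair"}),
]
print(explain(library, {"car", "wont-start", "cold", "morning"},
              {"prevention"}))
```

The design point mirrored here is that evaluate scores a hypothesis by its usefulness for current information needs as well as its fit to the anomaly, so explainers with different goals may accept different explanations of the same event.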