Gaining insight through case-based explanation
Journal of Intelligent Information Systems
Instilling confidence in the abilities of machine learning systems among end-users is seen as critical to their success on real-world problems. One way in which this can be achieved is by providing users with interpretable explanations of the system's predictions. CBR systems have long been understood to have an inherent transparency that offers particular advantages for explanation compared with other machine learning techniques. However, simply supplying the most similar case is often not enough. In this paper we present a framework for providing interpretable explanations of CBR systems, which includes dynamically created discursive texts explaining the feature-value relationships and a measure of confidence that the CBR system's prediction is correct. We also present a means by which the trade-off between being overly confident and overly cautious can be evaluated, and by which different methods can be compared. We have carried out a preliminary user evaluation of the framework and present our findings. It is clear from this evaluation that being right is important: caveats and notes of caution issued when the system is uncertain appear to damage user confidence.
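The abstract does not specify how the confidence measure is computed. As one minimal sketch of the general idea, a CBR classifier can report confidence as the similarity-weighted agreement among the k retrieved cases; the function names and the similarity form below are illustrative assumptions, not the paper's method.

```python
import math

def similarity(a, b):
    # Illustrative similarity on numeric feature vectors:
    # inverse of (1 + Euclidean distance), so identical cases score 1.0.
    return 1.0 / (1.0 + math.dist(a, b))

def knn_confidence(query, case_base, k=3):
    """Return (predicted_label, confidence) for a query case.

    case_base is a list of (feature_vector, label) pairs. Confidence is
    the share of total retrieval similarity supporting the predicted
    label, so unanimous neighbours give 1.0 and a split neighbourhood
    gives a value near 1/len(labels) -- a hypothetical measure, assumed
    here for illustration.
    """
    # Retrieve the k most similar cases.
    retrieved = sorted(case_base,
                       key=lambda c: similarity(query, c[0]),
                       reverse=True)[:k]
    # Predict the label with the greatest similarity-weighted support.
    labels = {lbl for _, lbl in retrieved}
    prediction = max(labels,
                     key=lambda lbl: sum(similarity(query, f)
                                         for f, l in retrieved if l == lbl))
    support = sum(similarity(query, f) for f, l in retrieved
                  if l == prediction)
    total = sum(similarity(query, f) for f, _ in retrieved)
    return prediction, support / total
```

A low confidence value from such a measure is what would trigger the cautionary caveats the evaluation found to damage user trust.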