The best way to instil confidence is by being right

  • Authors:
  • Conor Nugent; Pádraig Cunningham; Dónal Doyle

  • Affiliation:
  • Department of Computer Science, Trinity College Dublin

  • Venue:
  • ICCBR'05: Proceedings of the 6th International Conference on Case-Based Reasoning Research and Development
  • Year:
  • 2005


Abstract

Instilling confidence in machine learning systems among end-users is seen as critical to their success on real-world problems. One way to achieve this is to provide users with interpretable explanations of the system's predictions. CBR systems have long been understood to have an inherent transparency that gives them particular advantages for explanation over other machine learning techniques. However, simply supplying the most similar case is often not enough. In this paper we present a framework for providing interpretable explanations of CBR systems that includes dynamically generated discursive texts explaining the feature-value relationships and a measure of confidence that the CBR system's prediction is correct. We also present a means by which the trade-off between being overly confident and being overly cautious can be evaluated and different methods compared. We have carried out a preliminary user evaluation of the framework and present our findings. It is clear from this evaluation that being right is important: caveats and notes of caution when the system is uncertain appear to damage user confidence.
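
The abstract does not specify how the confidence measure is computed, so the sketch below is purely illustrative and not the authors' method. It attaches a similarity-weighted neighbour-agreement score to a k-nearest-neighbour CBR prediction; the distance metric, the value of k, and all function names are assumptions.

```python
import numpy as np

def cbr_predict_with_confidence(query, case_features, case_labels, k=5):
    """Return a class prediction and a similarity-weighted confidence score.

    Confidence is the fraction of total neighbour similarity that supports
    the predicted class: 1.0 when all retrieved cases agree, and close to
    1/num_classes when the neighbourhood is evenly split.
    """
    # Similarity as the inverse of Euclidean distance (an assumed choice).
    distances = np.linalg.norm(case_features - query, axis=1)
    similarities = 1.0 / (1.0 + distances)

    # Retrieve the k most similar cases from the case base.
    nearest = np.argsort(similarities)[-k:]

    # Accumulate similarity mass per class among the retrieved cases.
    support = {}
    for idx in nearest:
        label = case_labels[idx]
        support[label] = support.get(label, 0.0) + similarities[idx]

    prediction = max(support, key=support.get)
    confidence = support[prediction] / sum(support.values())
    return prediction, confidence


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cases = rng.normal(size=(100, 4))          # 100 cases, 4 features
    labels = (cases[:, 0] > 0).astype(int)     # toy binary outcome
    pred, conf = cbr_predict_with_confidence(rng.normal(size=4), cases, labels)
    print(f"predicted class {pred} with confidence {conf:.2f}")
```

A score of this kind could drive the choice between presenting a prediction plainly or with a caveat, which is the over-confidence versus over-caution trade-off the evaluation examines.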