Gaining insight through case-based explanation

  • Authors:
  • Conor Nugent; Dónal Doyle; Pádraig Cunningham

  • Affiliations:
  • 4C, University College Cork, Cork, Ireland; Idiro Technologies, Dublin, Ireland; Computer Science, University College Dublin, Dublin, Ireland

  • Venue:
  • Journal of Intelligent Information Systems
  • Year:
  • 2009


Abstract

Traditional explanation strategies in machine learning have been dominated by rule- and decision-tree-based approaches. Case-based explanations represent an alternative approach with inherent advantages in terms of transparency and user acceptability. Case-based explanations follow a strategy of presenting similar past examples in support of, and as justification for, the recommendations made. The traditional approach to such explanations, simply supplying the nearest neighbour as an explanation, has been found to have shortcomings. Cases should instead be selected based on their utility in forming useful explanations. However, the relevance of the explanation case may not be clear to the end user, as it is retrieved using domain knowledge that they themselves may not have. In this paper the focus is on knowledge-light approaches to case-based explanation that work by selecting cases based on explanation utility and offering insights into the effects of feature-value differences. We examine two such knowledge-light frameworks: explanation-oriented retrieval (EOR), a strategy that explicitly models explanation utility, and the knowledge-light explanation framework (KLEF), which uses local logistic regression to support case-based explanation.
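
To make the KLEF idea concrete, the sketch below fits a logistic regression on a distance-weighted neighbourhood of the query and uses the local coefficients to score the effect of each feature-value difference between the query and a nearby explanation case. This is an illustrative sketch under our own assumptions, not the authors' implementation: the function name, the neighbourhood size k, and the Gaussian kernel weighting are choices made here for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def klef_style_explanation(X, y, query, k=30):
    """Fit a logistic regression on the query's local neighbourhood and
    use its coefficients to gauge the effect of feature-value differences
    between the query and a nearby explanation case.
    Illustrative sketch only; not the paper's implementation."""
    # Neighbourhood: the k cases closest to the query.
    dists = np.linalg.norm(X - query, axis=1)
    nbrs = np.argsort(dists)[:k]

    # Gaussian kernel weights so that closer cases dominate the local model.
    bandwidth = np.median(dists[nbrs]) + 1e-12
    weights = np.exp(-(dists[nbrs] / bandwidth) ** 2)

    # Local logistic regression approximating the decision surface near
    # the query (assumes both classes appear in the neighbourhood).
    local = LogisticRegression().fit(X[nbrs], y[nbrs], sample_weight=weights)
    pred = local.predict(query.reshape(1, -1))[0]

    # Explanation case: the nearest neighbour with the predicted class.
    explain_idx = next(i for i in nbrs if y[i] == pred)

    # Contribution of each feature-value difference between query and
    # explanation case: positive values raise the local model's log-odds
    # of class 1.
    contributions = local.coef_[0] * (query - X[explain_idx])
    return explain_idx, pred, contributions
```

The signs of the returned contributions indicate which feature-value differences push the local model towards, or away from, the predicted outcome; it is this kind of local information that KLEF draws on to annotate the explanation case presented to the end user.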