Most users of machine-learning products are reluctant to rely on them without some sense of the underlying logic behind the system's predictions. Unfortunately, many of these systems lack transparency in how they operate and are regarded as black boxes. In this paper we present a Case-Based Reasoning (CBR) solution for providing supporting explanations of black-box systems. This CBR solution has two key facets: it uses local information to assess the importance of each feature, and, using this information, it selects cases from the data used to build the black-box system for use in explanation. The retrieval mechanism exploits the derived feature-importance information to select cases that better reflect the black-box solution and thus yield more convincing explanations.
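The two facets described above can be sketched as follows. This is a hypothetical illustration, not the authors' exact method: local feature importance is estimated here by perturbing the query and observing the change in the black box's output, and those importances then weight a nearest-neighbour retrieval over the training cases. The toy black box, case base, and perturbation size `eps` are all assumptions made for the example.

```python
import numpy as np

def local_feature_importance(black_box, query, eps=0.5):
    """Estimate each feature's local importance by perturbing it and
    measuring the change in the black box's output (one illustrative
    choice; other local importance measures could be substituted)."""
    base = black_box(query)
    importances = np.zeros(len(query))
    for i in range(len(query)):
        perturbed = query.copy()
        perturbed[i] += eps
        importances[i] = np.abs(black_box(perturbed) - base).sum()
    total = importances.sum()
    if total == 0:
        return np.ones_like(importances) / len(query)
    return importances / total

def retrieve_explanation_cases(cases, query, weights, k=3):
    """Weighted nearest-neighbour retrieval: cases close to the query
    on locally important features are preferred as explanations."""
    dists = np.sqrt((((cases - query) ** 2) * weights).sum(axis=1))
    return np.argsort(dists)[:k]

# Toy black box (a logistic model standing in for any opaque
# classifier) and a tiny hypothetical case base.
black_box = lambda x: np.array([1.0 / (1.0 + np.exp(-(3.0 * x[0] - 0.1 * x[1])))])
cases = np.array([[1.0, 9.0], [0.9, 0.1], [5.0, 1.1]])
query = np.array([1.0, 0.0])

w = local_feature_importance(black_box, query)
idx = retrieve_explanation_cases(cases, query, w, k=2)
```

Because feature 0 dominates the toy model locally, the retrieval favours cases that match the query on feature 0 even when they differ widely on feature 1, which is the intended effect of importance-weighted case selection.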