Explanation in Recommender Systems. Artificial Intelligence Review.
Being accurate is not enough: how accuracy metrics have hurt recommender systems. CHI '06 Extended Abstracts on Human Factors in Computing Systems.
Trust-inspiring explanation interfaces for recommender systems. Knowledge-Based Systems.
A Survey of Explanations in Recommender Systems. ICDEW '07 Proceedings of the 2007 IEEE 23rd International Conference on Data Engineering Workshop.
QUICKXPLAIN: preferred explanations and relaxations for over-constrained problems. AAAI '04 Proceedings of the 19th National Conference on Artificial Intelligence.
A general framework for explaining the results of a multi-attribute preference model. Artificial Intelligence.
User-centric preference-based decision making. Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 3.
Transparent provenance derivation for user decisions. IPAW '12 Proceedings of the 4th International Conference on Provenance and Annotation of Data and Processes.
Many different forms of explanation have been proposed for justifying decisions made by automated systems. However, there is no consensus on what constitutes a good explanation or on what information such explanations should include. In this paper, we present the results of a study of how people justify their decisions. From our analysis we extracted the forms of explanation people adopt to justify their choices, and the situations in which each form is used. Building on this analysis, we derived guidelines and patterns for the explanations that automated decision systems should generate. This paper presents the study, its results, and the guidelines and patterns derived from them.