When recommender systems present items, these can be accompanied by explanatory information. Such explanations can serve seven aims: effectiveness, satisfaction, transparency, scrutability, trust, persuasiveness, and efficiency. These aims can be incompatible, so any evaluation needs to state which aim is being investigated and use appropriate metrics. This paper focuses particularly on effectiveness (helping users to make good decisions) and its trade-off with satisfaction. It provides an overview of existing work on evaluating effectiveness and the metrics used. It also highlights the limitations of existing effectiveness metrics, in particular the effects of under- and overestimation and of the recommendation domain. In addition to this methodological contribution, the paper presents four empirical studies in two domains, movies and cameras, which investigate the impact of personalizing simple feature-based explanations on effectiveness and satisfaction. Both approximated and real effectiveness are investigated. Contrary to expectation, personalization was detrimental to effectiveness, though it may improve user satisfaction. The studies also highlight the importance of considering opt-out rates and the underlying rating distribution when evaluating effectiveness.
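To make the under- and overestimation and opt-out notions concrete, a common approximation in this line of work measures effectiveness as the gap between a user's rating of an item before trying it (based only on the explanation) and after trying it: a positive gap indicates the explanation led to overestimation, a negative gap to underestimation, and users who decline to rate from the explanation alone contribute to the opt-out rate. The sketch below is illustrative only; the function and variable names are hypothetical and not taken from the paper.

```python
def effectiveness_stats(pre_ratings, post_ratings):
    """Summarize explanation effectiveness from paired ratings.

    pre_ratings / post_ratings: parallel lists of ratings given before
    and after trying each item. A None in pre_ratings marks an opt-out
    (the user declined to rate based on the explanation alone).
    Returns (mean signed gap, mean absolute gap, opt-out rate).
    """
    # Keep only pairs where the user actually rated from the explanation.
    pairs = [(pre, post) for pre, post in zip(pre_ratings, post_ratings)
             if pre is not None]
    opt_out_rate = 1 - len(pairs) / len(pre_ratings)

    # Signed gap: positive = overestimation, negative = underestimation.
    gaps = [pre - post for pre, post in pairs]
    mean_signed = sum(gaps) / len(gaps)
    # Absolute gap: overall size of the error, regardless of direction.
    mean_abs = sum(abs(g) for g in gaps) / len(gaps)
    return mean_signed, mean_abs, opt_out_rate

# Example: four items on a 1-5 scale; the third user opted out.
signed, abs_gap, opt_out = effectiveness_stats(
    pre_ratings=[5, 3, None, 4], post_ratings=[4, 3, 2, 5])
```

Note that a mean signed gap near zero can hide large errors in both directions, which is why the absolute gap and the opt-out rate are reported alongside it, and why the abstract stresses the underlying rating distribution.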