Evaluating the effectiveness of explanations for recommender systems
User Modeling and User-Adapted Interaction
This paper studies the properties of helpful and trustworthy explanations in a movie recommender system. It reports the results of an experiment based on a natural-language explanation prototype, in which the explanations were varied along three factors: degree of personalization, polarity, and expression of unknown movie features. Personalized explanations were not found to be significantly more effective than non-personalized or baseline explanations; rather, explanations in all three conditions performed surprisingly well. We also found that participants rated the explanations themselves most highly in the personalized, feature-based condition.