Research on recommender system evaluation generally measures the quality of the algorithm, or system, offline, i.e., with information retrieval metrics such as precision or recall. These metrics, however, do not always reflect users' perceptions of the recommendations. Perception-related qualities are instead typically measured through user studies, yet the bulk of recommender systems work is evaluated through offline analysis. In this paper we set aside the quality of the recommender system itself and instead focus on how aspects of users' perception of recommender systems relate to one another. Based on a user study (N = 132), we show how concepts such as usefulness, ratings, obviousness, and serendipity correlate from the users' perspective.