A number of studies in the Recommender Systems (RS) domain suggest that the recommendations that are "best" according to objective metrics are sometimes not the most satisfactory or useful to users. This paper investigates the quality of RSs from a user-centric perspective. We discuss an empirical study that involved 210 users and seven recommender systems applying different baseline and state-of-the-art algorithms to the same dataset. We measured the users' perceived quality of each system, focusing on the accuracy and novelty of the recommended items and on overall user satisfaction. We ranked the considered recommenders with respect to these attributes and compared the results against measures of statistical quality of the same algorithms, as assessed by past studies in the field using information retrieval and machine learning methods.