The accuracy of collaborative filtering recommender systems depends largely on three factors: the quality of the rating prediction algorithm, and the quantity and quality of the available ratings. While research on recommender systems often concentrates on improving prediction algorithms, even the best algorithm will fail if it is trained on poor-quality data: garbage in, garbage out. Active learning aims to remedy this problem by focusing on acquiring better-quality ratings that more aptly reflect users' preferences. However, the traditional evaluation of active learning strategies has two major flaws, each with significant negative ramifications for accurately assessing system performance (prediction error, precision, and the quantity of elicited ratings): (1) performance has been evaluated for each user independently, ignoring system-wide improvements; and (2) active learning strategies have been evaluated in isolation from unsolicited user ratings (natural acquisition). In this article we show that an elicited rating has effects across the whole system, so a typical user-centric evaluation, which ignores changes in the rating predictions for other users, also ignores these cumulative effects, which may influence overall (system-centric) performance more strongly. We propose a new evaluation methodology and use it to assess several novel and state-of-the-art rating elicitation strategies. We find that the system-wide effectiveness of a rating elicitation strategy depends on the stage of the elicitation process and on the evaluation measure (MAE, NDCG, or Precision). In particular, we show that some common user-centric strategies may actually degrade the overall performance of the system. Finally, we show that the performance of many common active learning strategies changes significantly when they are evaluated concurrently with the natural acquisition of ratings.
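The gap between the two evaluation views can be illustrated with a minimal sketch. The predictor, data, and numbers below are illustrative assumptions (a toy item-mean predictor, not the paper's algorithms): after one rating is elicited from a single user, a user-centric evaluation measures the MAE change only on that user's test ratings, while a system-centric evaluation measures it over all users, capturing the cumulative effect the abstract describes.

```python
# Hypothetical sketch: user-centric vs system-centric effect of one elicited
# rating, using a toy item-mean predictor (an assumption for illustration).
from collections import defaultdict

def item_means(train):
    """Mean rating per item over (user, item, rating) triples."""
    totals, counts = defaultdict(float), defaultdict(int)
    for _, item, r in train:
        totals[item] += r
        counts[item] += 1
    return {i: totals[i] / counts[i] for i in totals}

def mae(test, means, global_mean=3.0, users=None):
    """Mean absolute error on test triples, optionally restricted to `users`."""
    errs = [abs(r - means.get(i, global_mean))
            for u, i, r in test if users is None or u in users]
    return sum(errs) / len(errs)

train = [("u1", "i1", 4.0), ("u2", "i2", 2.0)]
test  = [("u1", "i1", 5.0), ("u2", "i1", 3.0), ("u3", "i2", 2.0)]

# Elicit one rating from u1; it shifts predictions for *other* users too,
# because the mean of item i1 changes for everyone.
elicited = ("u1", "i1", 5.0)
before = item_means(train)
after  = item_means(train + [elicited])

print("user-centric MAE drop (u1 only):",
      mae(test, before, users={"u1"}) - mae(test, after, users={"u1"}))
print("system-centric MAE drop (all users):",
      mae(test, before) - mae(test, after))
```

In this toy run the elicited rating improves the prediction for u1 but worsens it for u2, so the user-centric view reports a clear gain while the system-wide gain is zero, which is exactly the kind of divergence a system-centric evaluation is meant to expose.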