One of the unresolved issues in designing a recommender system is how many ratings (i.e., the profile length) should be collected from a new user before providing recommendations. A design tension arises from two conflicting requirements. On the one hand, the system must collect "enough" ratings from the user in order to learn her/his preferences and improve the accuracy of its recommendations. On the other hand, gathering more ratings places a burden on the user, which may negatively affect the user experience. Our research investigates the effects of profile length from both a subjective (user-centric) point of view and an objective (accuracy-based) perspective. We carried out an offline simulation with three algorithms, and a set of online experiments involving 960 users in total and four recommender algorithms, to measure which of the two contrasting forces influenced by the number of collected ratings (relevance of the recommendations versus the burden of the rating process) has the stronger effect on the perceived quality of the user experience. Moreover, our study identifies a potentially optimal profile length for an explicit, rating-based, human-controlled elicitation strategy.
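To make the accuracy side of this tradeoff concrete, the following is a minimal, purely illustrative sketch (not the authors' actual experiment): it simulates users with a latent preference offset, elicits only the first `profile_length` ratings from each user, predicts held-out ratings with a simple user-mean predictor, and reports the mean absolute error (MAE). All rating distributions, user counts, and the predictor itself are assumptions made for illustration; the study described above used real recommender algorithms and real users.

```python
import random

random.seed(0)

def simulate_mae(profile_length, n_users=200, n_test=20):
    """Toy estimate of prediction error when only `profile_length`
    ratings are elicited from each new user (hypothetical setup)."""
    total_err, n = 0.0, 0
    for _ in range(n_users):
        bias = random.gauss(0, 1)  # latent per-user preference offset
        # Synthetic 1..5 star ratings: global mean 3 + user bias + noise.
        ratings = [min(5, max(1, round(3 + bias + random.gauss(0, 1))))
                   for _ in range(profile_length + n_test)]
        profile, test = ratings[:profile_length], ratings[profile_length:]
        pred = sum(profile) / len(profile)  # user-mean prediction
        total_err += sum(abs(r - pred) for r in test)
        n += len(test)
    return total_err / n

# Longer profiles give the predictor a better estimate of each user's
# mean, so the error should shrink as profile length grows.
for k in (2, 5, 10, 20, 40):
    print(f"profile length {k:2d}: MAE = {simulate_mae(k):.3f}")
```

The sketch captures only the accuracy curve; the subjective cost of rating more items, which the study weighs against it, has no counterpart in an offline simulation.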