Netflix.com uses star ratings, Digg.com uses up/down votes, and Facebook uses a "like" but not a "dislike" button. Despite the popularity and diversity of these rating scales, research offers little guidance for designers choosing between them. This paper compares four different rating scales: unary ("like it"), binary (thumbs up / thumbs down), five-star, and a 100-point slider. Our analysis draws upon 12,847 movie and product review ratings collected from 348 users through an online survey. We (a) measure the time and cognitive load required by each scale, (b) study how rating time varies with the rating value assigned by a user, and (c) survey users' satisfaction with each scale. Overall, users work harder with more granular rating scales, but these effects are moderated by item domain (product reviews or movies). Given a particular scale, users' rating times differ significantly between items they like and items they dislike. Our findings about users' rating effort and satisfaction suggest guidelines for designers choosing between rating scales.
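To make the cross-scale comparison concrete, here is a minimal sketch (not from the paper) of how the four scales could be modeled and mapped onto a common unit interval. The value ranges for the unary and binary encodings, and the normalization itself, are illustrative assumptions rather than the authors' instrumentation.

```python
# Hypothetical model of the four rating scales compared in the study.
# Raw value ranges are assumptions chosen for illustration.
SCALES = {
    "unary": {1},                   # "like it" (no explicit dislike)
    "binary": {0, 1},               # thumbs down / thumbs up
    "five_star": {1, 2, 3, 4, 5},   # whole stars only
    "slider": set(range(0, 101)),   # 100-point slider
}

def normalize(scale: str, value: int) -> float:
    """Map a raw rating onto [0, 1] so scales can be compared side by side."""
    values = SCALES[scale]
    if value not in values:
        raise ValueError(f"{value} is not a valid {scale} rating")
    lo, hi = min(values), max(values)
    if hi == lo:          # unary: the single "like" value
        return 1.0
    return (value - lo) / (hi - lo)

# Example: a 4-star rating and a slider rating of 75 land at the same point.
assert normalize("five_star", 4) == normalize("slider", 75) == 0.75
```

A shared representation like this is one plausible way to put ratings from different scales on a common footing; the paper's own analysis compares the scales on effort and satisfaction rather than prescribing a normalization.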