When a website hosting user-generated content asks users a straightforward question - "Was this content helpful?", answered with a single "Yes" or "No" button - one might expect a straightforward answer. In this paper, we explore how users respond to this question and find that their responses are not so straightforward after all. Using data from Amazon product reviews, we present evidence that users do not make absolute, independent voting decisions based on individual review quality alone. Rather, both whether users vote at all and the polarity of their vote on a given review depend on the context in which they view it: reviews receive more votes overall when they are 'misranked', and votes skew positive when a review is ranked lower than it deserves and negative when it is ranked higher. We distill these empirical findings into a new probabilistic model of rating behavior that captures the dependence of rating decisions on context. Understanding and formally modeling voting behavior is crucial for designing learning mechanisms and algorithms for review ranking, and we conjecture that many of our findings also apply to user behavior in other online content-rating settings.
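To make the abstract's qualitative claims concrete, the Python sketch below simulates one context-dependent voter. It is a minimal illustration of the two empirical findings, not the paper's actual probabilistic model; the function name, the parameters base_rate and misrank_gain, and the linear polarity shift are all hypothetical choices made here for illustration.

import random

def vote(quality, shown_rank, deserved_rank,
         base_rate=0.05, misrank_gain=0.1):
    """Simulate one user's voting decision on a review.

    quality        -- latent review quality in [0, 1]
    shown_rank     -- position at which the review is displayed (1 = top)
    deserved_rank  -- position the review would merit given its quality
    Returns 'helpful', 'unhelpful', or None (no vote).
    All parameters and functional forms are illustrative assumptions.
    """
    # >0: review is ranked lower than it deserves; <0: ranked higher.
    misrank = shown_rank - deserved_rank

    # Finding 1: misranked reviews attract more votes overall,
    # so the probability of voting grows with the size of the misranking.
    p_vote = min(1.0, base_rate + misrank_gain * abs(misrank))
    if random.random() > p_vote:
        return None  # user views the review but casts no vote

    # Finding 2: polarity shifts with the direction of the misranking.
    # A review buried below its deserved rank draws corrective 'helpful'
    # votes; one inflated above its deserved rank draws 'unhelpful' votes.
    p_helpful = max(0.0, min(1.0, quality + 0.05 * misrank))
    return 'helpful' if random.random() < p_helpful else 'unhelpful'

For example, vote(quality=0.6, shown_rank=10, deserved_rank=3) models a good review buried at position 10: under these assumptions it is both more likely to be voted on at all and more likely to receive a 'helpful' vote than the same review shown at its deserved rank.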