Online reviews have become increasingly popular as a way to judge the quality of various products and services. However, recent work demonstrates that the absence of reporting incentives leads to a biased set of reviews that may not reflect true quality. In this paper, we investigate the underlying factors that influence users when they report feedback. In particular, we study both reporting incentives and reporting biases observed in a widely used review forum, the TripAdvisor website. We consider three sources of information: first, the numerical ratings left by the user for different aspects of quality; second, the textual comment accompanying a review; and third, the patterns in the time sequence of reports. We first show that groups of users who discuss a certain feature at length are more likely to agree in their ratings. Second, we show that users are more motivated to give feedback when they perceive a greater risk in a transaction. Third, a user's rating partly reflects the difference between true quality and the prior expectation of quality inferred from previous reviews. Finally, we observe that, because of these biases, the mean and the median of the review scores can differ substantially, and we speculate that the median may be a better way to summarize the ratings.
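As a rough illustration of the last point, the sketch below (using hypothetical ratings, not data from the paper) shows how the mean and the median diverge on the J-shaped score distributions typical of review forums, where a few very negative reports sit alongside many enthusiastic ones.

```python
from statistics import mean, median

# Hypothetical 1-5 star ratings with a J-shaped profile: many enthusiastic
# reports, a few very negative ones, and little in between.
ratings = [5, 5, 5, 5, 5, 4, 5, 5, 1, 2, 5, 5, 1, 5, 4]

print(f"mean   = {mean(ratings):.2f}")    # pulled down by the few extreme low scores
print(f"median = {median(ratings):.2f}")  # largely unaffected by the negative tail
```

On this sample the mean is about 4.13 while the median is 5.00, which is why a robust summary such as the median can better reflect the typical reported experience.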