For current Web 2.0 services, manual examination of user-uploaded content is normally required to ensure its legitimacy and appropriateness, which places a substantial burden on service providers. To reduce labor costs and the delays caused by content screening, social moderation has been proposed as a front-line mechanism, whereby user moderators are encouraged to examine content before system moderation is required. Given the immense amount of new content added to the Web each day, automation schemes are needed to facilitate the system moderation stage that follows. Such a mechanism is expected to automatically summarize reports from user moderators and, whenever possible, ban misbehaving users or remove inappropriate content. However, the accuracy of such schemes may be degraded by collusion attacks, in which a group of users works together to mislead the automatic summarization for shared benefit. In this paper, we propose a collusion-resistant automation scheme for social moderation systems. Because some user moderators may collude and dishonestly claim that a user misbehaves, our scheme judges whether an accusation from a user moderator is fair or malicious based on the structure of mutual accusations among all users in the system. Through simulations we show that collusion attacks are likely to succeed when an intuitive count-based automation scheme is used, whereas the proposed scheme, which exploits the community structure of the user accusation graph, performs well in most scenarios.
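The abstract does not spell out the algorithm, so the following is a minimal sketch, assuming Python with the networkx library, of how an accusation graph might be screened using community detection. The function name flag_suspect_accusations, the concentration threshold, and the choice of greedy modularity community detection are illustrative assumptions, not the authors' method.

```python
# A minimal sketch, assuming Python with the networkx library. The
# concentration heuristic, names, and threshold below are illustrative
# assumptions; the paper's concrete algorithm is not given in the abstract.
from collections import Counter, defaultdict

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities


def flag_suspect_accusations(accusations, concentration=0.8):
    """Screen (accuser, accused) reports for possible collusion.

    Heuristic: if nearly all accusers of a user fall into a single
    detected community of the accusation graph, their reports look
    coordinated rather than independent, so they are flagged instead
    of being summed up as evidence of misbehavior.
    """
    accusations = list(accusations)
    graph = nx.Graph()
    graph.add_edges_from(accusations)

    # Partition users by the community structure of the accusation graph.
    community_of = {
        node: cid
        for cid, members in enumerate(greedy_modularity_communities(graph))
        for node in members
    }

    accusers_of = defaultdict(list)
    for accuser, accused in accusations:
        accusers_of[accused].append(accuser)

    suspect = {}
    for accused, accusers in accusers_of.items():
        # Share of accusers coming from the single largest community.
        top = Counter(community_of[a] for a in accusers).most_common(1)[0][1]
        # In practice a minimum report count would also be required, so
        # that a lone accusation is not trivially flagged.
        suspect[accused] = top / len(accusers) >= concentration
    return suspect
```

In a deployment, the communities would be computed over the full accusation history, so that independent accusers tend to land in distinct communities; reports flagged as suspect would then be withheld from automatic banning and escalated to human system moderators, while the remaining accusations feed the automatic summarization as usual.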