Social networks rely heavily on the concept of reputation. Some platforms implement formalized systems to express reputation, for example as a numeric rating, but the concept is broader: the reputation of a user, the perceived quality of a product, the popularity of a TV show, or any other subject of published information very often stems from a more informal collection of comments and recommendations. Guaranteeing the authenticity of the published data has therefore become very important, and various systems have been developed to address this problem. In this paper, however, we demonstrate that the most commonly adopted filtering techniques do not adequately protect messaging platforms from the automated injection of comments. Our methodology is largely empirical, but it nonetheless allows us not only to expose the vulnerability, but also to make educated guesses about the reasons the tested filters fail. In the conclusion, we trace a possible path toward a more effective solution.
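To make the claim concrete, the following minimal sketch (not taken from the paper; the blacklist, function name, and example comments are hypothetical) illustrates one reason naive content filters fail: a keyword-based filter matches exact tokens, so a trivially rephrased spam comment evades it.

```python
# Hypothetical illustration: a toy keyword-based comment filter.
# An automated injector can evade it by simple synonym substitution.

BLACKLIST = {"buy", "cheap", "viagra", "click"}

def is_spam(comment: str) -> bool:
    """Flag a comment as spam if any lowercase token is blacklisted."""
    tokens = comment.lower().split()
    return any(token in BLACKLIST for token in tokens)

print(is_spam("Buy cheap pills now"))         # True: exact keywords caught
print(is_spam("Purchase inexpensive pills"))  # False: synonym swap evades
```

A filter of this kind checks surface tokens rather than intent, which is consistent with the paper's observation that commonly deployed defenses can be bypassed by automated, lightly varied message injection.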