Detecting spam web pages through content analysis
Proceedings of the 15th international conference on World Wide Web
Detecting online commercial intention (OCI)
Proceedings of the 15th international conference on World Wide Web
A content-driven reputation system for the wikipedia
Proceedings of the 16th international conference on World Wide Web
Spam double-funnel: connecting web spammers with advertisers
Proceedings of the 16th international conference on World Wide Web
Fighting Spam on Social Web Sites: A Survey of Approaches and Future Challenges
IEEE Internet Computing
Creating, destroying, and restoring value in wikipedia
Proceedings of the 2007 international ACM conference on Supporting group work
Spamalytics: an empirical analysis of spam marketing conversion
Proceedings of the 15th ACM conference on Computer and communications security
SS'08 Proceedings of the 17th conference on Security symposium
The work of sustaining order in wikipedia: the banning of a vandal
Proceedings of the 2010 ACM conference on Computer supported cooperative work
Readers are not free-riders: reading as a form of participation on wikipedia
Proceedings of the 2010 ACM conference on Computer supported cooperative work
Detecting Wikipedia vandalism via spatio-temporal analysis of revision metadata?
Proceedings of the Third European Workshop on System Security
Automatic vandalism detection in Wikipedia
ECIR'08 Proceedings of the IR research, 30th European conference on Advances in information retrieval
What did they do? Deriving high-level edit histories in Wikis
Proceedings of the 6th International Symposium on Wikis and Open Collaboration
On the potential of proactive domain blacklisting
LEET'10 Proceedings of the 3rd USENIX conference on Large-scale exploits and emergent threats: botnets, spyware, worms, and more
Detecting and characterizing social spam campaigns
Proceedings of the 17th ACM conference on Computer and communications security
Proliferation and Detection of Blog Spam
IEEE Security and Privacy
Re: CAPTCHAs: understanding CAPTCHA-solving services in an economic context
USENIX Security'10 Proceedings of the 19th USENIX conference on Security
Wikipedia vandalism detection: combining natural language, metadata, and reputation features
CICLing'11 Proceedings of the 12th international conference on Computational linguistics and intelligent text processing - Volume Part II
The nuts and bolts of a forum spam automator
LEET'11 Proceedings of the 4th USENIX conference on Large-scale exploits and emergent threats
What Wikipedia deletes: characterizing dangerous collaborative content
Proceedings of the 7th International Symposium on Wikis and Open Collaboration
Autonomous link spam detection in purely collaborative environments
Proceedings of the 7th International Symposium on Wikis and Open Collaboration
Spamming for science: active measurement in web 2.0 abuse research
FC'12 Proceedings of the 16th international conference on Financial Cryptography and Data Security
The consensus game: modeling peer decision protocols
Proceedings of the Eighth Annual International Symposium on Wikis and Open Collaboration
Collaborative functionality is an increasingly prevalent web technology. To encourage participation, these systems usually have low barriers to entry and permissive privileges. Unsurprisingly, ill-intentioned users try to leverage these characteristics for nefarious purposes. In this work we examine a particular abuse, link spamming: the addition of promotional or otherwise inappropriate hyperlinks. Our analysis focuses on the wiki model and, in particular, the collaborative encyclopedia Wikipedia. A principal goal of spammers is to maximize exposure, the number of people who view a link. Creating and analyzing the first Wikipedia link spam corpus, we find that existing spam strategies perform quite poorly in this regard. The status quo spamming model relies on link persistence to accumulate exposures, a strategy that fails given the diligence of the Wikipedia community. Instead, we propose a model that exploits the latency inherent in human anti-spam enforcement. Statistical estimation suggests our novel model would produce significantly more link exposures than status quo techniques. More critically, the strategy could prove economically viable for perpetrators, incentivizing its exploitation. Accordingly, we also address mitigation strategies.
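The exposure argument in the abstract can be illustrated with a back-of-envelope calculation: exposures scale roughly with page view rate multiplied by how long a link survives before removal. The sketch below is not from the paper; the function name and all numbers are hypothetical, chosen only to show why a latency-exploiting spammer (many links on high-traffic pages, removed quickly) can out-perform a persistence-based one (links on obscure pages that linger).

```python
# Illustrative rate-times-survival exposure model (hypothetical numbers,
# not figures from the paper).

def expected_exposures(views_per_hour: float,
                       hours_until_removal: float,
                       links_placed: int = 1) -> float:
    """Estimate total link exposures as view rate x survival time x links."""
    return views_per_hour * hours_until_removal * links_placed

# Status quo "persistence" strategy: links on low-traffic pages that
# survive for days before patrollers notice them.
persistence = expected_exposures(views_per_hour=0.5,
                                 hours_until_removal=72,
                                 links_placed=10)

# Hypothetical "latency" strategy: links on popular pages that survive
# only as long as human anti-spam enforcement takes to react.
latency = expected_exposures(views_per_hour=500,
                             hours_until_removal=0.25,
                             links_placed=10)

print(persistence)  # 360.0
print(latency)      # 1250.0
```

Even with a survival window shorter by two orders of magnitude, the high-traffic placements dominate, which is the intuition behind exploiting enforcement latency rather than link persistence.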