Malware spread among websites, and from websites to clients, is an increasing problem. Search engines play an important role in directing users to websites and are therefore a natural control point for intervening through mechanisms such as blacklisting. This paper presents a simple Markov model of malware spread through large populations of websites and studies the effect of two interventions that a search provider might deploy: blacklisting, which removes infected web pages from search results entirely, and a generalization of blacklisting called depreferencing, in which a website's ranking is decreased by a fixed percentage for each time period the site remains infected. We analyze the trade-off between infection exposure and the traffic loss due to false positives (the cost to a website that is incorrectly blacklisted) under each intervention. As expected, we find that interventions are most effective when websites are slow to remove infections. Surprisingly, we also find that low infection or recovery rates can increase traffic loss due to false positives. Our analysis further shows that the heavy-tailed distributions of website popularity documented in many studies lead to high sample variance in all measured outcomes. This result implies that it will be difficult to determine empirically whether a given intervention is effective, and it suggests that theoretical models such as the one described in this paper have an important role to play in improving web security.
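The trade-off the abstract describes can be made concrete with a small simulation. The sketch below is an illustration only, not the paper's actual model or parameters: each site is treated as an independent two-state (clean/infected) Markov chain, site popularity follows a Zipf-like power law, and the intervention multiplies a flagged site's traffic by a factor `depref` for each period it remains flagged, so `depref = 0.0` reduces to outright blacklisting. All function names, parameter names, and numeric values here are assumptions chosen for the example.

```python
import random

def simulate(n_sites=5000, steps=200,
             p_infect=0.01,      # assumed per-step infection probability
             p_recover=0.05,     # assumed per-step cleanup probability
             p_false_pos=0.001,  # assumed per-step false-positive rate
             p_fp_clear=0.2,     # assumed per-step false-positive clearance rate
             depref=0.5,         # ranking multiplier per flagged period; 0.0 = blacklist
             alpha=2.0,          # assumed power-law exponent for popularity
             seed=0):
    """Toy discrete-time simulation; returns (exposure, false-positive traffic loss)."""
    rng = random.Random(seed)

    # Heavy-tailed (Zipf-like) popularity: traffic share ~ rank^(-alpha)
    w = [(r + 1) ** -alpha for r in range(n_sites)]
    total = sum(w)
    pop = [x / total for x in w]

    infected = [False] * n_sites
    flagged = [False] * n_sites
    mult = [1.0] * n_sites  # current ranking multiplier from depreferencing

    exposure = fp_loss = 0.0
    for _ in range(steps):
        for i in range(n_sites):
            # Infection / recovery transitions of the two-state chain
            if infected[i]:
                if rng.random() < p_recover:
                    infected[i] = flagged[i] = False
            elif rng.random() < p_infect:
                infected[i] = True

            # Detection: infected sites are flagged; clean sites occasionally
            # suffer a false positive, which is later cleared
            if infected[i]:
                flagged[i] = True
            elif flagged[i]:
                if rng.random() < p_fp_clear:
                    flagged[i] = False
            elif rng.random() < p_false_pos:
                flagged[i] = True

            # Depreferencing compounds each period a site stays flagged
            mult[i] = mult[i] * depref if flagged[i] else 1.0
            traffic = pop[i] * mult[i]

            if infected[i]:
                exposure += traffic          # users exposed to an infected site
            elif flagged[i]:
                fp_loss += pop[i] - traffic  # traffic a clean site loses

    return exposure, fp_loss

if __name__ == "__main__":
    for depref in (1.0, 0.5, 0.0):  # no intervention, depreferencing, blacklisting
        print(f"depref={depref}: exposure, fp_loss = {simulate(depref=depref)}")
```

Running the example with several seeds illustrates the abstract's variance point: because a few high-rank sites carry most of the traffic under a power-law popularity distribution, both outcome measures swing widely depending on whether those few sites happen to get infected or falsely flagged, which is why empirical comparisons of interventions are hard.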