Machine learning's ability to adapt rapidly to changing and complex situations has made it a fundamental tool for computer security. That adaptability is also a vulnerability: attackers can exploit machine learning systems. We present a taxonomy for identifying and analyzing attacks against machine learning systems. We show how the resulting attack classes influence the costs for the attacker and the defender, and we give a formal structure defining their interaction. We use our framework to survey and analyze the literature on attacks against machine learning systems. We also illustrate our taxonomy by showing how it can guide attacks against SpamBayes, a popular statistical spam filter. Finally, we discuss how our taxonomy suggests new lines of defense.
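To make the SpamBayes discussion concrete: the taxonomy distinguishes, among other axes, *causative* attacks (which manipulate the training data) from *exploratory* attacks (which only probe a fixed model), and *availability* attacks (which aim to block legitimate traffic) from *integrity* attacks (which aim to slip malicious traffic through). The sketch below illustrates a causative availability attack of the dictionary-attack flavor against a toy naive-Bayes-style token scorer. The scorer, training data, and message texts are invented for illustration; this is not SpamBayes itself, only a minimal model of the attack mechanism.

```python
import math
from collections import Counter

def train(messages):
    """Fit a toy token model from (text, is_spam) pairs."""
    spam_counts, ham_counts = Counter(), Counter()
    n_spam = n_ham = 0
    for text, is_spam in messages:
        tokens = set(text.lower().split())
        if is_spam:
            spam_counts.update(tokens)
            n_spam += 1
        else:
            ham_counts.update(tokens)
            n_ham += 1
    return spam_counts, ham_counts, n_spam, n_ham

def spam_score(model, text):
    """Log-likelihood-ratio score; positive means 'classified as spam'."""
    spam_counts, ham_counts, n_spam, n_ham = model
    score = 0.0
    for tok in set(text.lower().split()):
        p_spam = (spam_counts[tok] + 1) / (n_spam + 2)  # Laplace smoothing
        p_ham = (ham_counts[tok] + 1) / (n_ham + 2)
        score += math.log(p_spam / p_ham)
    return score

# Clean training data: the filter correctly passes a benign message.
clean = [
    ("buy viagra now", True),
    ("cheap viagra deal", True),
    ("meeting tomorrow at noon", False),
    ("project report attached", False),
]
victim = "project meeting tomorrow"
clean_score = spam_score(train(clean), victim)  # negative: ham

# Causative availability attack: the attacker sends spam stuffed with
# ordinary ham vocabulary. Once users label those messages as spam, the
# benign tokens themselves become evidence of spam, and the victim's
# legitimate mail is misfiled.
poison = [("buy now project meeting tomorrow report", True)] * 5
poisoned_score = spam_score(train(clean + poison), victim)  # positive: spam
```

The attacker never needs to see the model's internals; controlling a slice of the training data is enough to raise the false-positive rate on legitimate mail, which is exactly the cost asymmetry the taxonomy is built to analyze.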