Many learning tasks, such as spam filtering and credit card fraud detection, face an active adversary that tries to avoid detection. For learning problems that involve an active adversary, it is important to model the adversary's attack strategy and to develop robust learning models that mitigate the attack. These are the two objectives of this paper. We consider two attack models: a free-range attack model that permits arbitrary data corruption, and a restrained attack model that anticipates the more realistic attacks a reasonable adversary would devise under penalties. We then develop optimal SVM learning strategies against these two attack models. The learning algorithms minimize the hinge loss while assuming the adversary modifies the data to maximize the loss. Experiments are performed on both artificial and real data sets. We demonstrate that optimal solutions may be overly pessimistic when the actual attacks are much weaker than expected. More importantly, we show that a far more resilient SVM learning model can be developed under loose assumptions on the data corruption model. When derived under the restrained attack model, our optimal SVM learning strategy delivers more robust overall performance across a wide range of attack parameters.
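The core idea above — minimizing the hinge loss while assuming the adversary perturbs each sample to maximize it — can be illustrated with a minimal sketch. The sketch below is not the paper's algorithm; it assumes a simple bounded-perturbation adversary (each feature shifted by at most `c_f`), for which the worst-case perturbation of a linear model's margin has the closed form `c_f * ||w||_1`. The function name and all parameters are illustrative.

```python
import numpy as np

def robust_svm_train(X, y, c_f=0.1, lam=0.01, lr=0.01, epochs=200):
    """Sketch of worst-case hinge-loss minimization for a linear SVM.

    Assumes an adversary who may shift every feature of a sample by at
    most c_f in the direction that increases the loss; for a linear
    model this worst case subtracts c_f * ||w||_1 from each margin.
    Illustrative only -- not the optimal strategies from the paper.
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        # worst-case margin after the adversary's feature shifts
        adv_margins = margins - c_f * np.abs(w).sum()
        active = adv_margins < 1.0  # samples with nonzero hinge loss
        # subgradient of (lam/2)||w||^2 + mean hinge loss over samples
        grad_w = (lam * w
                  - (y[active, None] * X[active]).sum(axis=0) / n
                  + c_f * np.sign(w) * active.sum() / n)
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

Setting `c_f = 0` recovers ordinary subgradient training of a soft-margin SVM; increasing `c_f` trades clean-data margin for resilience to the assumed perturbations.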