Security issues are crucial in a number of machine learning applications, especially in scenarios dealing with human activity rather than natural phenomena (e.g., information ranking, spam detection, malware detection). In such cases, learning algorithms may have to cope with manipulated data aimed at hampering decision making. Although some previous work has addressed the issue of handling malicious data in the context of supervised learning, very little is known about the behavior of anomaly detection methods in such scenarios. In this contribution, we analyze the performance of a particular method, online centroid anomaly detection, in the presence of adversarial noise. Our analysis addresses the following security-related issues: formalization of learning and attack processes, derivation of an optimal attack, and analysis of attack efficiency and limitations. We derive bounds on the effectiveness of a poisoning attack against centroid anomaly detection under different conditions: the attacker's full or limited control over the traffic, and a bounded false-positive rate. Our bounds show that whereas a poisoning attack can be effectively staged in the unconstrained case, it can be made arbitrarily difficult (with a strict upper bound on the attacker's gain) if external constraints are properly enforced. Our experimental evaluation, carried out on real traces of HTTP and exploit traffic, confirms the tightness of our theoretical bounds and the practicality of our protection mechanisms.
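To make the setting concrete, the following is a minimal sketch (not the paper's actual formulation) of an online centroid detector and a greedy poisoning strategy. All names, the fixed learning rate, and the boundary-placement heuristic are illustrative assumptions: the detector flags a point as anomalous if its distance to the current centroid exceeds a radius, and the attacker injects points on the boundary of the normality region in the direction of an attack target.

```python
import numpy as np

class OnlineCentroid:
    """Hypothetical sketch of an online centroid anomaly detector.

    A point is accepted as normal if its distance to the current
    centroid is at most radius r; each accepted point shifts the
    centroid with a fixed learning rate (one common online variant).
    """

    def __init__(self, center, radius, rate=0.05):
        self.c = np.asarray(center, dtype=float)  # current centroid
        self.r = float(radius)                    # normality radius
        self.rate = float(rate)                   # online update rate

    def is_normal(self, x):
        return np.linalg.norm(np.asarray(x, float) - self.c) <= self.r

    def update(self, x):
        # Only points classified as normal are allowed to move the centroid.
        if self.is_normal(x):
            self.c = (1 - self.rate) * self.c + self.rate * np.asarray(x, float)
            return True
        return False


def greedy_poisoning(detector, target, steps):
    """Greedy attack: each injected point lies on the sphere boundary
    in the direction of the attacker's target, maximizing the per-step
    displacement of the centroid toward the target."""
    target = np.asarray(target, dtype=float)
    for _ in range(steps):
        d = target - detector.c
        dist = np.linalg.norm(d)
        if dist <= detector.r:
            break  # target already inside the normality region
        x = detector.c + detector.r * d / dist  # boundary point toward target
        detector.update(x)
    return detector.c
```

Under this toy model, each accepted boundary point moves the centroid by exactly `rate * r` toward the target, so the attacker's displacement grows linearly with the number of injected points; this is the unconstrained case the abstract refers to, and it is precisely the behavior that constraints such as a bounded false-positive rate or limited traffic control are meant to curb.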