Pattern recognition systems are widely used in adversarial classification tasks such as spam filtering and intrusion detection in computer networks. In these applications a malicious adversary may successfully mislead a classifier by "poisoning" its training data with carefully crafted attack samples. Bagging is a well-known ensemble construction method in which each classifier in the ensemble is trained on a different bootstrap replicate of the training set. Recent work has shown that bagging can reduce the influence of outliers in the training data, especially if the most outlying observations are resampled with lower probability. In this work we argue that poisoning attacks can be viewed as a particular category of outliers, and that bagging ensembles may therefore be effectively exploited against them. We experimentally assess the effectiveness of bagging on a real, widely used spam filter and on a web-based intrusion detection system. Our preliminary results suggest that bagging ensembles can be a very promising defence strategy against poisoning attacks, and they provide valuable insights for future research.
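The weighted-bagging idea described above can be sketched in a few lines. This is a minimal illustration, not the authors' exact setup: the outlyingness score (distance to the overall data centroid), the inverse-score sampling weights, and the nearest-centroid base classifier are all simplifying assumptions chosen for brevity; the paper itself evaluates real spam-filtering and intrusion-detection systems.

```python
import numpy as np

def outlyingness(X):
    """Score each training sample by its distance to the data centroid.
    (A crude stand-in for a proper outlier score; hypothetical choice.)"""
    centroid = X.mean(axis=0)
    return np.linalg.norm(X - centroid, axis=1)

def weighted_bagging_predict(X_train, y_train, X_test, n_estimators=25, seed=0):
    """Weighted bagging: draw each bootstrap replicate so that the most
    outlying training points (candidate poisoning samples) are resampled
    with lower probability, then majority-vote the base classifiers."""
    rng = np.random.default_rng(seed)
    # Inverse-outlyingness weights, normalised to a sampling distribution.
    w = 1.0 / (1.0 + outlyingness(X_train))
    w /= w.sum()
    n = len(X_train)
    classes = np.unique(y_train)
    votes = np.zeros((len(X_test), len(classes)), dtype=int)
    for _ in range(n_estimators):
        idx = rng.choice(n, size=n, replace=True, p=w)
        Xb, yb = X_train[idx], y_train[idx]
        # Nearest-centroid base classifier on the bootstrap replicate.
        centroids = []
        for c in classes:
            Xc = Xb[yb == c]
            if len(Xc) == 0:  # class missing from replicate: fall back
                Xc = X_train[y_train == c]
            centroids.append(Xc.mean(axis=0))
        centroids = np.array(centroids)
        d = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
        for i, p in enumerate(np.argmin(d, axis=1)):
            votes[i, p] += 1
    return classes[np.argmax(votes, axis=1)]
```

With two well-separated clusters plus a few far-away points carrying flipped labels (a toy poisoning attack), the poisoned points receive small sampling weights, appear in few bootstrap replicates, and the ensemble's vote recovers the clean decision.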