Learning in the presence of malicious errors
SIAM Journal on Computing
A decision-theoretic generalization of on-line learning and an application to boosting
Journal of Computer and System Sciences - Special issue: 26th annual ACM symposium on the theory of computing & STOC'94, May 23–25, 1994, and second annual European conference on computational learning theory (EuroCOLT'95), March 13–15, 1995
A statistical approach to the spam problem
Linux Journal
A Brief Introduction to Boosting
IJCAI '99 Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence
Novelty detection: a review—part 1: statistical approaches
Signal Processing
Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining
Diagnosing network-wide traffic anomalies
Proceedings of the 2004 conference on Applications, technologies, architectures, and protocols for computer communications
Polygraph: Automatically Generating Signatures for Polymorphic Worms
SP '05 Proceedings of the 2005 IEEE Symposium on Security and Privacy
Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining
Can machine learning be secure?
ASIACCS '06 Proceedings of the 2006 ACM Symposium on Information, computer and communications security
Hamsa: Fast Signature Generation for Zero-day Polymorphic Worms with Provable Attack Resilience
SP '06 Proceedings of the 2006 IEEE Symposium on Security and Privacy
Prediction, Learning, and Games
Evading network anomaly detection systems: formal reasoning and practical techniques
Proceedings of the 13th ACM conference on Computer and communications security
An inquiry into the nature and causes of the wealth of internet miscreants
Proceedings of the 14th ACM conference on Computer and communications security
Exploiting machine learning to subvert your spam filter
LEET'08 Proceedings of the 1st Usenix Workshop on Large-Scale Exploits and Emergent Threats
Undermining an anomaly-based intrusion detection system using common exploits
RAID'02 Proceedings of the 5th international conference on Recent advances in intrusion detection
Advanced allergy attacks: does a corpus really help?
RAID'07 Proceedings of the 10th international conference on Recent advances in intrusion detection
Allergy attack against automatic signature generation
RAID'06 Proceedings of the 9th international conference on Recent Advances in Intrusion Detection
Proceedings of the 4th ACM workshop on Security and artificial intelligence
On the Value of Coordination in Distributed Self-Adaptation of Intrusion Detection System
WI-IAT '11 Proceedings of the 2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology - Volume 02
Network-theoretic classification of parallel computation patterns
International Journal of High Performance Computing Applications
Adversarial attacks against intrusion detection systems: Taxonomy, solutions and open issues
Information Sciences: an International Journal
An agent-based model to simulate and analyse behaviour under noisy and deceptive information
Adaptive Behavior - Animals, Animats, Software Agents, Robots, Adaptive Systems
Security analysis of online centroid anomaly detection
The Journal of Machine Learning Research
Machine learning has become a valuable tool for detecting and preventing malicious activity. However, as more applications employ machine learning techniques in adversarial decision-making settings, increasingly powerful attacks against learning systems become possible. In this paper, we present three broad research directions toward the goal of developing truly secure learning. First, we suggest that finding bounds on adversarial influence is important for understanding the limits of what an attacker can and cannot do to a learning system. Second, we investigate the value of adversarial capabilities: the success of an attack depends largely on what types of information and influence the attacker has. Finally, we propose directions in technologies for secure learning and suggest lines of investigation into secure techniques for learning in adversarial environments. We intend this paper to foster discussion about the security of machine learning, and we believe that the research directions we propose are the most important ones to pursue in the quest for secure learning.