Security analysis of online centroid anomaly detection
The Journal of Machine Learning Research
We propose a framework for the quantitative security analysis of machine learning methods. Its key components are the formal specification of a deployed learning model and the attacker's constraints, the computation of an optimal attack, and the derivation of an upper bound on the adversarial impact. We apply the framework to one specific learning scenario, online centroid anomaly detection, and experimentally verify the tightness of the resulting theoretical bounds.
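To make the analyzed scenario concrete, the following is a minimal sketch of an online centroid anomaly detector: points are flagged as anomalous when they fall outside a hypersphere around the mean of a working set, and accepted points update the centroid. The fixed radius threshold, sliding-window size, and oldest-out replacement rule here are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

class OnlineCentroidAnomalyDetector:
    """Illustrative online centroid anomaly detector.

    Assumed details (for the sketch only): a fixed radius threshold
    and an oldest-out replacement rule for the working set.
    """

    def __init__(self, init_points, radius):
        self.points = [np.asarray(p, dtype=float) for p in init_points]
        self.center = np.mean(self.points, axis=0)  # current centroid
        self.radius = radius                        # acceptance threshold

    def is_anomalous(self, x):
        # A point is anomalous if it lies outside the hypersphere
        # of the given radius around the current centroid.
        return np.linalg.norm(np.asarray(x, dtype=float) - self.center) > self.radius

    def update(self, x):
        # Online step: only points accepted as normal enter the
        # working set; the oldest point is evicted and the centroid
        # is recomputed. This acceptance gate is exactly what a
        # poisoning adversary exploits by injecting points that pass
        # the check while dragging the centroid toward a target.
        x = np.asarray(x, dtype=float)
        if not self.is_anomalous(x):
            self.points.pop(0)
            self.points.append(x)
            self.center = np.mean(self.points, axis=0)
```

Under this update rule, each accepted point can shift the centroid by at most a bounded amount, which is the kind of per-step displacement bound the framework's analysis of adversarial impact builds on.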