As a growing number of software developers apply machine learning to make key decisions in their systems, adversaries are adapting and launching ever more sophisticated attacks against these systems. The near-optimal evasion problem models an adversary who seeks a low-cost instance that the classifier labels negative, while submitting as few queries to the classifier as possible, in order to evade detection efficiently. In this position paper, we pose several open problems and alternative variants of the near-optimal evasion problem. Solutions to these problems would significantly advance the state of the art in secure machine learning.
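To make the query-based setting concrete, the following is a minimal sketch (not taken from the paper) of one classic strategy: the adversary, who observes only the classifier's label on each query, binary-searches the line segment between a known negative instance and its desired target, finding a negative instance close to the target with only logarithmically many queries. The classifier `classify` and all instance values here are hypothetical stand-ins.

```python
import numpy as np

def classify(x):
    # Hypothetical stand-in classifier: flags instances whose first
    # feature exceeds a threshold as positive ("malicious").
    return x[0] > 0.5

def evade(target, known_negative, eps=1e-3):
    """Binary search along the segment from known_negative (labeled -)
    toward target (labeled +). Returns a negatively-labeled instance
    near the decision boundary and the number of queries spent."""
    lo, hi = 0.0, 1.0  # fraction of the way from known_negative to target
    queries = 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        x = known_negative + mid * (target - known_negative)
        queries += 1
        if classify(x):
            hi = mid  # crossed the boundary: back off toward the negative side
        else:
            lo = mid  # still negative: move closer to the target
    return known_negative + lo * (target - known_negative), queries

# The adversary's target is positive; a benign instance is known to be negative.
best, q = evade(np.array([1.0, 0.0]), np.array([0.0, 0.0]))
```

The query cost is O(log(1/eps)) regardless of dimension, which illustrates why the hardness of near-optimal evasion depends on the cost function and the family of classifiers rather than on raw query precision.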