Classifier evasion: models and open problems

  • Authors:
  • Blaine Nelson; Benjamin I. P. Rubinstein; Ling Huang; Anthony D. Joseph; J. D. Tygar

  • Affiliations:
  • UC Berkeley; Microsoft Research; Intel Labs Berkeley; UC Berkeley and Intel Labs Berkeley; UC Berkeley

  • Venue:
  • PSDML'10: Proceedings of the International ECML/PKDD Conference on Privacy and Security Issues in Data Mining and Machine Learning
  • Year:
  • 2010

Abstract

As a growing number of software developers apply machine learning to make key decisions in their systems, adversaries are adapting and launching ever more sophisticated attacks against these systems. The near-optimal evasion problem considers an adversary that searches for a low-cost negative instance, submitting as few queries to the classifier as possible in order to evade it effectively. In this position paper, we pose several open problems and alternative variants of the near-optimal evasion problem. Solutions to these problems would significantly advance the state of the art in secure machine learning.
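
To make the query-based setting concrete, the following is a minimal sketch of an evasion search that uses only membership queries: a binary search along the segment between the adversary's desired (positive) instance and a known negative instance. The `classify` interface, the toy cost (proximity to the target), and the convexity assumption along the search line are illustrative choices, not the algorithms analyzed in the paper.

```python
import numpy as np

def evade_by_line_search(classify, x_target, x_negative, tol=1e-3):
    """Find a negative instance near x_target using only membership queries.

    classify(x) -> True if the classifier labels x positive (e.g., blocked).
    x_target    -- the adversary's desired instance (labeled positive).
    x_negative  -- any instance known to be labeled negative.

    Illustrative only: binary search along the segment between x_target and
    x_negative, assuming the negative region is convex along that line.
    Returns a negative instance close to x_target and the number of queries.
    """
    queries = 0
    lo, hi = 0.0, 1.0  # interpolation weight toward x_negative
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        candidate = (1.0 - mid) * x_target + mid * x_negative
        queries += 1
        if classify(candidate):
            lo = mid   # still positive: move further toward x_negative
        else:
            hi = mid   # negative: try to move back toward the target
    return (1.0 - hi) * x_target + hi * x_negative, queries


if __name__ == "__main__":
    # Hypothetical toy classifier: flag instances whose feature sum exceeds 5.
    classify = lambda x: float(np.sum(x)) > 5.0
    x_target = np.array([3.0, 4.0])    # desired instance (would be blocked)
    x_negative = np.array([0.0, 0.0])  # known allowed instance
    x_evading, n = evade_by_line_search(classify, x_target, x_negative)
    print(x_evading, classify(x_evading), n)
```

Under these assumptions the search issues roughly log2(1/tol) queries, which illustrates why query complexity, rather than classifier accuracy alone, is the central quantity in the near-optimal evasion problem.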