Can machine learning be secure?

  • Authors:
  • Marco Barreno, Blaine Nelson, Russell Sears, Anthony D. Joseph, and J. D. Tygar

  • Affiliation:
  • University of California, Berkeley

  • Venue:
  • ASIACCS '06: Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security
  • Year:
  • 2006


Abstract

Machine learning systems offer unparalleled flexibility in dealing with evolving input in a variety of applications, such as intrusion detection systems and spam e-mail filtering. However, machine learning algorithms themselves can be a target of attack by a malicious adversary. This paper provides a framework for answering the question, "Can machine learning be secure?" The novel contributions of this paper include a taxonomy of different types of attacks on machine learning techniques and systems, a variety of defenses against those attacks, a discussion of ideas that are important to security for machine learning, an analytical model giving a lower bound on the attacker's work function, and a list of open problems.
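
As a toy illustration (not taken from the paper) of the kind of attack the abstract describes, the following Python sketch shows a training-data poisoning attack against a simple spam filter, in the spirit of the "causative" attacks in the authors' taxonomy. The tiny corpus, the Naive Bayes model, and the attack budget of four injected messages are all assumptions made purely for demonstration.

```python
# Toy sketch (assumed setup, not the paper's model): an adversary poisons
# a spam filter's training data so a chosen spam message evades detection.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hand-made corpus of (text, label) pairs; label 1 = spam, 0 = ham.
clean_train = [
    ("cheap pills buy now", 1),
    ("limited offer win money", 1),
    ("free prize claim today", 1),
    ("meeting agenda attached", 0),
    ("lunch tomorrow at noon", 0),
    ("project status update", 0),
]
# The message the attacker wants the filter to pass through.
target_spam = "cheap pills limited offer"

def flags_as_spam(examples, message):
    """Train a bag-of-words Naive Bayes filter and classify one message."""
    texts, labels = zip(*examples)
    vec = CountVectorizer()
    clf = MultinomialNB().fit(vec.fit_transform(texts), labels)
    return clf.predict(vec.transform([message]))[0] == 1

print("clean model flags target as spam:",
      flags_as_spam(clean_train, target_spam))        # True

# Causative attack: inject mislabeled copies of the target's wording as
# ham, shifting the learned word statistics toward the benign class.
poison = [("cheap pills limited offer", 0)] * 4
print("poisoned model flags target as spam:",
      flags_as_spam(clean_train + poison, target_spam))  # False
```

With only a handful of mislabeled training points, the filter's verdict on the target message flips, which is the kind of adversarial influence on the learner itself that motivates the paper's taxonomy and its lower bound on the attacker's work.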