Adversarial machine learning

  • Authors:
Ling Huang; Anthony D. Joseph; Blaine Nelson; Benjamin I. P. Rubinstein; J. D. Tygar

  • Affiliations:
Intel Labs Berkeley, Berkeley, CA, USA; UC Berkeley, Berkeley, CA, USA; University of Tübingen, Tübingen, Germany; Microsoft, Mountain View, CA, USA; UC Berkeley, Berkeley, CA, USA

  • Venue:
  • Proceedings of the 4th ACM workshop on Security and artificial intelligence
  • Year:
  • 2011


Abstract

In this paper (expanded from an invited talk at AISEC 2010), we discuss an emerging field of study: adversarial machine learning---the study of effective machine learning techniques against an adversarial opponent. Specifically, we: give a taxonomy for classifying attacks against online machine learning algorithms; discuss application-specific factors that limit an adversary's capabilities; introduce two models of an adversary's capabilities; explore the limits of an adversary's knowledge about the algorithm, feature space, training data, and input data; explore vulnerabilities in machine learning algorithms; discuss countermeasures against attacks; introduce the evasion challenge; and discuss privacy-preserving learning techniques.