Approaches to adversarial drift

  • Authors:
Alex Kantchelian, Sadia Afroz, Ling Huang, Aylin Caliskan Islam, Brad Miller, Michael Carl Tschantz, Rachel Greenstadt, Anthony D. Joseph, and J. D. Tygar

  • Affiliations:
University of California at Berkeley, Berkeley, CA, USA; Drexel University, Philadelphia, PA, USA; Intel Labs, Berkeley, CA, USA; Drexel University, Philadelphia, PA, USA; University of California at Berkeley, Berkeley, CA, USA; University of California at Berkeley, Berkeley, CA, USA; Drexel University, Philadelphia, PA, USA; University of California at Berkeley, Berkeley, CA, USA; University of California at Berkeley, Berkeley, CA, USA

  • Venue:
Proceedings of the 2013 ACM Workshop on Artificial Intelligence and Security
  • Year:
  • 2013

Abstract

In this position paper, we argue that to be of practical interest, a machine learning-based security system must engage with human operators beyond feature engineering and instance labeling in order to address the challenge of drift in adversarial environments. We propose that designers of such systems broaden the classification goal into an explanatory goal, which would deepen the interaction with the system's operators. To provide guidance, we advocate an approach based on maintaining one classifier for each class of unwanted activity to be filtered. We also emphasize the need for the system to be responsive to the operators' constant curation of the training set. We show how this paradigm provides a property we call isolation and how it relates to classical causative attacks. To demonstrate the effects of drift on a binary classification task, we also report on two experiments using a previously unpublished malware data set in which each instance is timestamped according to when it was seen.
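
The sketch below is not the authors' implementation; it is a minimal illustration, assuming scikit-learn and hypothetical feature and label arrays, of the abstract's core proposal: maintain one binary detector per class of unwanted activity, flag an instance when any detector fires, and let operator curation retrain only the detector for the affected class.

```python
# Minimal sketch (assumed design, not the paper's code) of per-class filtering
# with operator curation. Class names and data layout are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression


class PerClassFilter:
    def __init__(self, unwanted_classes):
        # One independent binary detector per class of unwanted activity.
        self.detectors = {c: LogisticRegression(max_iter=1000)
                          for c in unwanted_classes}

    def fit(self, X, labels):
        # labels[i] is the unwanted class of instance i, or None if benign.
        for c, clf in self.detectors.items():
            y = np.array([1 if label == c else 0 for label in labels])
            clf.fit(X, y)

    def filter(self, X):
        # An instance is filtered when any per-class detector flags it.
        votes = np.column_stack([clf.predict(X)
                                 for clf in self.detectors.values()])
        return votes.any(axis=1)

    def curate(self, X, labels, affected_class):
        # Operator curation: relabeled instances trigger retraining of the
        # detector for the affected class only, leaving the others untouched.
        y = np.array([1 if label == affected_class else 0 for label in labels])
        self.detectors[affected_class].fit(X, y)
```

Under this reading, curating one class of unwanted activity does not perturb the detectors for the other classes, which is one way to interpret the isolation property mentioned above; likewise, a time-ordered split (training on older timestamps, testing on newer ones) is the kind of evaluation that would expose the drift the two experiments measure.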