Sampling attack against active learning in adversarial environment

  • Authors:
  • Wentao Zhao; Jun Long; Jianping Yin; Zhiping Cai; Geming Xia

  • Affiliations:
  • National University of Defense Technology, Changsha, Hunan, China (all authors)

  • Venue:
  • MDAI'12: Proceedings of the 9th International Conference on Modeling Decisions for Artificial Intelligence
  • Year:
  • 2012

Abstract

Active learning plays an important role in many areas because it reduces human labeling effort by selecting only the most informative instances for training. Nevertheless, active learning is vulnerable in adversarial environments such as intrusion detection and spam filtering. The purpose of this paper is to reveal how active learning can be attacked in such environments. The paper makes three contributions: first, we analyze the sampling vulnerability of active learning; second, we present a game framework for attacks against active learning; third, we propose two sampling attack methods, the adding attack and the deleting attack. Experimental results show that both proposed sampling attacks degrade the sampling efficiency of a naive Bayes active learner.
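The abstract does not give the paper's experimental details, but the core idea of an adding attack against uncertainty sampling can be sketched as follows. This is a hypothetical illustration, not the authors' method: a pool-based active learner with a hand-rolled 1-D Gaussian naive Bayes model queries the pool point whose posterior is closest to 0.5, and the attacker floods the unlabeled pool with crafted points near the decision boundary so that the learner's query budget is spent on attacker-controlled instances instead of genuinely informative clean ones.

```python
import math
import random

def fit_gaussian_nb(labeled):
    """Estimate per-class mean/variance from (x, y) pairs (1-D Gaussian NB)."""
    stats = {}
    for c in (0, 1):
        xs = [x for x, y in labeled if y == c]
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs) + 1e-6  # avoid zero variance
        stats[c] = (mu, var)
    return stats

def posterior_pos(stats, x):
    """P(y=1 | x) under equal class priors, computed in log space for stability."""
    def loglik(c):
        mu, var = stats[c]
        return -(x - mu) ** 2 / (2 * var) - 0.5 * math.log(2 * math.pi * var)
    d = loglik(0) - loglik(1)
    d = max(min(d, 700.0), -700.0)   # clamp so math.exp cannot overflow
    return 1.0 / (1.0 + math.exp(d))

def uncertainty_query(stats, pool):
    """Uncertainty sampling: pick the pool index with posterior closest to 0.5."""
    return min(range(len(pool)), key=lambda i: abs(posterior_pos(stats, pool[i]) - 0.5))

random.seed(0)
# Clean data: class 0 ~ N(-2, 1), class 1 ~ N(+2, 1); the oracle labels by sign.
oracle = lambda x: int(x > 0.0)
pool = [random.gauss(-2, 1) for _ in range(100)] + [random.gauss(2, 1) for _ in range(100)]
labeled = [(-2.5, 0), (-1.5, 0), (1.5, 1), (2.5, 1)]   # small seed set

# Adding attack (hypothetical variant): inject points hugging the decision
# boundary at x = 0, exactly where uncertainty sampling looks first.
attack_points = [random.gauss(0.0, 0.05) for _ in range(50)]
attacked_pool = pool + attack_points
attacker = set(range(len(pool), len(attacked_pool)))

queried_attacker = 0
for _ in range(20):                  # 20 query rounds
    stats = fit_gaussian_nb(labeled)
    i = uncertainty_query(stats, attacked_pool)
    if i in attacker:
        queried_attacker += 1
    labeled.append((attacked_pool[i], oracle(attacked_pool[i])))
    attacked_pool.pop(i)
    attacker = {j - 1 if j > i else j for j in attacker if j != i}

print(f"attacker points among 20 queries: {queried_attacker}")
```

With this setup most of the 20 queries land on attacker points, so the learner's labeling budget buys almost no information about the clean class-conditional distributions, which is the "degraded sampling efficiency" the abstract refers to. A deleting attack would instead remove informative points from the pool; it is not shown here.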