A misleading attack against semi-supervised learning for intrusion detection

  • Authors:
  • Fangzhou Zhu, Jun Long, Wentao Zhao, Zhiping Cai

  • Affiliations:
  • Huazhong University of Science and Technology, Wuhan, China, and National University of Defense Technology, Changsha, Hunan, China (all authors)

  • Venue:
  • MDAI'10, Proceedings of the 7th International Conference on Modeling Decisions for Artificial Intelligence
  • Year:
  • 2010

Abstract

Machine learning has become a popular approach to intrusion detection because it can adapt to changing conditions. However, high-quality labeled instances are scarce, so some researchers have turned to semi-supervised learning, which exploits unlabeled instances to enhance classification. Involving unlabeled instances in the learning process also introduces a vulnerability: attackers can generate fake unlabeled instances that mislead the final classifier so that certain intrusions go undetected. In this paper we show how attackers can influence a semi-supervised classifier by constructing such unlabeled instances, and we propose a possible defense based on active learning. Experiments show that the misleading attack reduces the accuracy of the semi-supervised learning method, and that under this attack the proposed defense achieves higher accuracy than the original semi-supervised learner.
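The attack mechanism described in the abstract can be illustrated with a minimal, self-contained sketch. This is not the paper's algorithm or experimental setup: it assumes a one-dimensional toy feature, a nearest-centroid base classifier, and a plain self-training loop, all chosen here for brevity. The attacker injects fake unlabeled points between the two classes, slightly on the "normal" side of the initial boundary; self-training pseudo-labels them as normal, which drags the normal centroid toward the intrusion region, so a borderline intrusion evades detection.

```python
import numpy as np

def self_train(X_lab, y_lab, X_unl, rounds=10):
    """Minimal nearest-centroid self-training (illustrative only):
    repeatedly pseudo-label the unlabeled pool with the current
    centroids, then refit the centroids on labeled + pseudo-labeled data."""
    c0 = X_lab[y_lab == 0].mean()   # centroid of class 0 ("normal")
    c1 = X_lab[y_lab == 1].mean()   # centroid of class 1 ("intrusion")
    for _ in range(rounds):
        pseudo = (np.abs(X_unl - c1) < np.abs(X_unl - c0)).astype(int)
        X = np.concatenate([X_lab, X_unl])
        y = np.concatenate([y_lab, pseudo])
        c0, c1 = X[y == 0].mean(), X[y == 1].mean()
    return c0, c1

def predict(c0, c1, x):
    # Nearest-centroid decision rule.
    return int(abs(x - c1) < abs(x - c0))

# Toy labeled data: the single feature could be, say, a scaled connection rate.
X_lab = np.array([0.0, 1.0, 10.0, 11.0])
y_lab = np.array([0, 0, 1, 1])           # 0 = normal, 1 = intrusion

X_clean = np.array([0.5, 1.5, 9.5, 10.5])   # benign unlabeled pool
fakes = np.full(12, 4.5)                    # attacker-crafted unlabeled points
X_poisoned = np.concatenate([X_clean, fakes])

clean = self_train(X_lab, y_lab, X_clean)
poisoned = self_train(X_lab, y_lab, X_poisoned)

# A borderline intrusion at 6.5 is caught by the clean model (→ 1) but
# slips past the poisoned one (→ 0): the fakes were pseudo-labeled normal
# and pulled the normal centroid from 0.75 up to about 3.56.
print(predict(*clean, 6.5), predict(*poisoned, 6.5))   # → 1 0
```

The abstract's proposed defense queries an oracle (active learning) for true labels of suspicious unlabeled points; in this sketch, that would amount to asking an analyst to label the cluster of points at 4.5 before trusting their pseudo-labels.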