Bagging classifiers for fighting poisoning attacks in adversarial classification tasks

  • Authors:
  • Battista Biggio; Igino Corona; Giorgio Fumera; Giorgio Giacinto; Fabio Roli

  • Affiliations:
  • Dept. of Electrical and Electronic Engineering, University of Cagliari, Cagliari, Italy (all authors)

  • Venue:
  • MCS'11 Proceedings of the 10th international conference on Multiple classifier systems
  • Year:
  • 2011

Abstract

Pattern recognition systems have been widely used in adversarial classification tasks like spam filtering and intrusion detection in computer networks. In these applications a malicious adversary may successfully mislead a classifier by "poisoning" its training data with carefully designed attacks. Bagging is a well-known ensemble construction method, where each classifier in the ensemble is trained on a different bootstrap replicate of the training set. Recent work has shown that bagging can reduce the influence of outliers in training data, especially if the most outlying observations are resampled with a lower probability. In this work we argue that poisoning attacks can be viewed as a particular category of outliers, and, thus, that bagging ensembles may be effectively exploited against them. We experimentally assess the effectiveness of bagging on a real, widely used spam filter, and on a web-based intrusion detection system. Our preliminary results suggest that bagging ensembles can be a very promising defence strategy against poisoning attacks, and they provide valuable insights for future research.
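
To make the idea concrete, the sketch below illustrates the general mechanism described in the abstract: a bagging ensemble in which bootstrap replicates resample the most outlying training points (here taken as candidate poisoning samples) with lower probability, and predictions are obtained by majority vote. This is not the authors' exact procedure; the outlyingness score, the probability weighting, and the helper names (`outlyingness`, `fit_weighted_bagging`, `predict_majority`) are illustrative assumptions, and a standard decision-tree base learner is used purely for demonstration.

```python
# Illustrative sketch (not the paper's method): bagging with a weighted
# bootstrap that downweights outlying training points.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def outlyingness(X):
    # Toy outlyingness score: distance from the feature-wise median.
    # Any robust outlier score could be substituted here.
    med = np.median(X, axis=0)
    return np.linalg.norm(X - med, axis=1)


def fit_weighted_bagging(X, y, n_estimators=25, random_state=0):
    rng = np.random.default_rng(random_state)
    n = len(y)
    # Convert outlyingness into resampling probabilities: the more outlying
    # a training point, the less likely it enters a bootstrap replicate.
    scores = outlyingness(X)
    probs = 1.0 / (1.0 + scores)
    probs /= probs.sum()
    ensemble = []
    for _ in range(n_estimators):
        idx = rng.choice(n, size=n, replace=True, p=probs)
        clf = DecisionTreeClassifier(random_state=random_state)
        clf.fit(X[idx], y[idx])
        ensemble.append(clf)
    return ensemble


def predict_majority(ensemble, X):
    # Majority vote over the base classifiers (assumes labels in {0, 1}).
    votes = np.stack([clf.predict(X) for clf in ensemble])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

Compared with plain bagging (uniform resampling), the weighted bootstrap makes it less likely that any single classifier in the ensemble sees many of the poisoned points, which is the intuition behind treating poisoning samples as outliers.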