Understanding the risk factors of learning in adversarial environments

  • Authors:
  • Blaine Nelson; Battista Biggio; Pavel Laskov

  • Affiliations:
  • University of Tübingen, Tübingen, Germany; Department of Electrical and Electronic Engineering, University of Cagliari, Italy; University of Tübingen, Tübingen, Germany

  • Venue:
  • Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence
  • Year:
  • 2011

Abstract

Learning for security applications is an emerging field in which adaptive approaches are needed but are complicated by changing adversarial behavior. Traditional approaches to learning assume benign errors in the data and thus may be vulnerable to adversarial errors. In this paper, we incorporate the notion of adversarial corruption directly into the learning framework and derive a new criterion for classifier robustness to adversarial contamination.
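The distinction the abstract draws between benign and adversarial errors can be illustrated with a toy example. The following sketch is not the paper's method; it uses a hypothetical 1-D nearest-centroid classifier and an illustrative poisoning strategy to show how a single adversarially placed training point can do damage that zero-mean benign noise cannot:

```python
# Minimal sketch (illustrative, not the paper's method): adversarial
# contamination vs. benign errors for a 1-D nearest-centroid classifier.
# All data values and the poisoning strategy below are assumptions.

def centroid(points):
    """Mean of a list of 1-D points."""
    return sum(points) / len(points)

def classify(x, c_pos, c_neg):
    """Assign x to the class whose centroid is nearer."""
    return +1 if abs(x - c_pos) < abs(x - c_neg) else -1

# Clean 1-D training data for two well-separated classes.
pos = [2.0, 2.5, 3.0]      # class +1
neg = [-2.0, -2.5, -3.0]   # class -1
test = [(2.2, +1), (-2.2, -1), (0.5, +1)]

c_pos, c_neg = centroid(pos), centroid(neg)
clean_acc = sum(classify(x, c_pos, c_neg) == y for x, y in test) / len(test)

# An adversary controlling a small fraction of the training set can place
# a single point far on the wrong side, dragging the +1 centroid across
# the decision boundary -- unlike benign noise, which averages out.
poisoned_pos = pos + [-20.0]           # one adversarial training point
c_pos_p = centroid(poisoned_pos)
pois_acc = sum(classify(x, c_pos_p, c_neg) == y for x, y in test) / len(test)

print(clean_acc, pois_acc)  # accuracy drops from 1.0 to 1/3
```

The unbounded influence of a single point on the mean is exactly the kind of vulnerability that motivates robustness criteria accounting for worst-case, rather than random, contamination.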