Finding Corrupted Computers Using Imperfect Intrusion Prevention System Event Data

  • Authors:
  • Danielle Chrun; Michel Cukier; Gerry Sneeringer

  • Affiliations:
  • Center for Risk and Reliability, University of Maryland, Maryland, 20742-7531; Center for Risk and Reliability, University of Maryland, Maryland, 20742-7531; Office of Information Technology, University of Maryland, Maryland, 20742-7531

  • Venue:
  • SAFECOMP '08: Proceedings of the 27th International Conference on Computer Safety, Reliability, and Security
  • Year:
  • 2008

Abstract

With the increase in attacks on the Internet, a primary concern for organizations is how to protect their networks. The objectives of a security team are 1) to prevent external attackers from launching successful attacks that compromise organization computers, and 2) to ensure that organization computers are not vulnerable (e.g., that they are fully patched), so that in either case the organization's computers do not themselves start launching attacks. The security team can monitor and block malicious activity by using devices such as intrusion prevention systems. However, in large organizations, such monitoring devices can record a very high number of events. The contributions of this paper are 1) to introduce a method that ranks potentially corrupted computers based on imperfect intrusion prevention system event data, and 2) to evaluate the method on empirical data collected at a large organization of about 40,000 computers. The evaluation relies on a security expert's judgment of which computers were indeed corrupted. On the one hand, we studied how many computers classified as of high concern or of concern were indeed corrupted (i.e., true positives). On the other hand, we analyzed how many computers classified as of lower concern were in fact corrupted (i.e., false negatives).
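
The paper defines its own ranking method; as a purely illustrative sketch (not the authors' algorithm), the Python snippet below shows one hypothetical way to aggregate IPS events per host into a score and bucket hosts into concern levels. The field names (`src_ip`, `severity`, `signature`) and the thresholds are assumptions introduced here for illustration only.

```python
from collections import defaultdict

# Hypothetical IPS event records; field names and values are illustrative
# and are not taken from the paper's data set.
events = [
    {"src_ip": "10.0.0.5", "severity": 3, "signature": "botnet-cnc"},
    {"src_ip": "10.0.0.5", "severity": 2, "signature": "port-scan"},
    {"src_ip": "10.0.0.9", "severity": 1, "signature": "policy-violation"},
]

def score_hosts(events):
    """Naive per-host score: sum of event severities for each source IP."""
    scores = defaultdict(int)
    for event in events:
        scores[event["src_ip"]] += event["severity"]
    return scores

def classify(score, high=5, medium=2):
    """Bucket a score into a concern level using arbitrary thresholds."""
    if score >= high:
        return "high concern"
    if score >= medium:
        return "of concern"
    return "lower concern"

# Rank hosts from highest to lowest score and print their concern level.
for host, score in sorted(score_hosts(events).items(), key=lambda kv: -kv[1]):
    print(host, score, classify(score))
```

In practice, an evaluation like the paper's would compare such a ranking against ground truth (here, a security expert's judgment) to measure true positives among the high-concern hosts and false negatives among the lower-concern ones.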