Beyond Output Voting: Detecting Compromised Replicas Using HMM-Based Behavioral Distance

  • Authors:
  • Debin Gao; Michael K. Reiter; Dawn Song

  • Affiliations:
  • Singapore Management University, Singapore; University of North Carolina at Chapel Hill, Chapel Hill; University of California, Berkeley

  • Venue:
  • IEEE Transactions on Dependable and Secure Computing

  • Year:
  • 2009

Abstract

Many host-based anomaly detection techniques have been proposed to detect code-injection attacks on servers. The vast majority, however, are susceptible to "mimicry" attacks, in which the injected code masquerades as the original server software, including returning the correct service responses, while conducting its attack. "Behavioral distance," by which two diverse replicas processing the same inputs are continually monitored to detect divergence in their low-level (system-call) behaviors, and hence potentially the compromise of one of them, has been proposed for detecting mimicry attacks. In this paper, we present a novel approach to behavioral distance measurement using a new type of Hidden Markov Model, and present an architecture realizing this new approach. We evaluate the detection capability of this approach using synthetic workloads and recorded workloads of production web and game servers, and show that it detects intrusions with substantially greater accuracy than a prior proposal for measuring behavioral distance. We also detail the design and implementation of a new architecture that takes advantage of virtualization to measure behavioral distance. We apply our architecture to implement intrusion-tolerant web and game servers, and through trace-driven simulations demonstrate that it incurs only moderate performance costs even when thresholds are set to detect stealthy mimicry attacks.
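
For intuition, below is a minimal sketch of the scoring step the abstract describes: an HMM whose observations are pairs of system calls, one from each replica (with a null placeholder when one replica emits a call the other does not), scored with the forward algorithm. Everything here is a hypothetical illustration: the alphabet `PAIRS`, the two-state parameters `A`, `B`, `pi`, the function `log_likelihood`, and `THRESHOLD` are toy values of my choosing, not the paper's trained model, which is considerably more elaborate.

```python
import numpy as np

# Sketch of HMM-based behavioral distance (hypothetical toy parameters).
# Each observation is a pair of system calls, one from each replica;
# None stands in for a missing call, so the two traces may differ in length.

# Toy alphabet of system-call pairs, indexed into the emission matrix B.
PAIRS = [("read", "read"), ("write", "write"), ("read", None), (None, "write")]
PAIR_INDEX = {p: i for i, p in enumerate(PAIRS)}

A = np.array([[0.8, 0.2],                # state-transition probabilities
              [0.3, 0.7]])               # (two hidden states)
B = np.array([[0.60, 0.30, 0.05, 0.05],  # P(pair | state); rows sum to 1
              [0.10, 0.60, 0.15, 0.15]])
pi = np.array([0.5, 0.5])                # initial state distribution


def log_likelihood(pair_seq):
    """Scaled forward algorithm: log P(observed pair sequence | HMM)."""
    alpha = pi * B[:, PAIR_INDEX[pair_seq[0]]]
    log_p = 0.0
    for pair in pair_seq[1:]:
        alpha = (alpha @ A) * B[:, PAIR_INDEX[pair]]
        scale = alpha.sum()              # rescale to avoid underflow
        log_p += np.log(scale)
        alpha /= scale
    return log_p + np.log(alpha.sum())


# Behavioral distance as negative log-likelihood: a pair sequence the model
# deems improbable (large distance) suggests the replicas have diverged.
THRESHOLD = 10.0  # hypothetical; in practice tuned per workload
seq = [("read", "read"), ("write", "write"), ("read", "read")]
if -log_likelihood(seq) > THRESHOLD:
    print("ALARM: replicas diverged (possible compromise)")
```

Casting the distance as a negative log-likelihood makes the detector a single-threshold test, which is the trade-off the evaluation above explores: raising the threshold catches stealthier mimicry attacks at the cost of more false alarms on benign workload divergence.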