Efficient techniques for comprehensive protection from memory error exploits

  • Authors:
  • Sandeep Bhatkar, R. Sekar, Daniel C. DuVarney

  • Affiliations:
  • Department of Computer Science, Stony Brook University, Stony Brook, NY (all authors)

  • Venue:
  • SSYM'05 Proceedings of the 14th conference on USENIX Security Symposium - Volume 14
  • Year:
  • 2005

Abstract

Despite the wide publicity received by buffer overflow attacks, the vast majority of today's security vulnerabilities continue to be caused by memory errors, with a significant shift away from stack-smashing exploits toward newer attacks such as heap overflows, integer overflows, and format-string attacks. While comprehensive solutions have been developed to handle memory errors, these solutions suffer from one or more of the following problems: high overheads (often exceeding 100%), incompatibility with legacy C code, and changes to the memory model, such as the use of garbage collection. Address space randomization (ASR) is a technique that avoids these drawbacks, but existing ASR techniques do not offer a level of protection comparable to the above solutions. In particular, attacks that exploit relative distances between memory objects are not tackled by existing techniques. Moreover, these techniques are susceptible to information leakage and brute-force attacks. To overcome these limitations, we develop a new approach in this paper that supports comprehensive randomization, whereby the absolute locations of all (code and data) objects, as well as their relative distances, are randomized. We argue that this approach provides probabilistic protection against all memory error exploits, whether known or novel. Our approach is implemented as a fully automatic source-to-source transformation that is compatible with legacy C code. The address-space randomizations take place at load time or runtime, so the same copy of the binaries can be distributed to everyone; this ensures compatibility with today's software distribution model. Experimental results demonstrate an average runtime overhead of about 11%.
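
To make the two kinds of randomization concrete, here is a minimal, self-contained C sketch. It is an illustration of the idea only, not the paper's actual source-to-source transformation: the names alloc_with_random_gap and real_main are invented for this example, and rand() stands in for whatever entropy source a real implementation would use. Relative distances between heap objects are randomized by inserting randomly sized padding allocations, and the absolute location of stack frames is randomized by consuming a random amount of stack at startup.

```c
/* Sketch of ASR-style randomization: relative heap distances and
 * absolute stack locations. Assumes a POSIX system and a C99 compiler
 * (for the variable-length array). */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

/* Randomize the RELATIVE distance between heap objects by placing a
 * randomly sized padding allocation before each real allocation. */
static void *alloc_with_random_gap(size_t n)
{
    size_t pad = (size_t)(rand() % 4096) + 1;
    void *gap = malloc(pad);   /* padding object; intentionally kept
                                  alive so the gap persists */
    (void)gap;
    return malloc(n);
}

static int real_main(void)
{
    char *a = alloc_with_random_gap(64);
    char *b = alloc_with_random_gap(64);
    int x;                     /* stack variable: its address reflects
                                  the random stack shift in main() */
    printf("a=%p b=%p heap distance=%ld\n",
           (void *)a, (void *)b,
           (long)((intptr_t)b - (intptr_t)a));
    printf("stack variable at %p\n", (void *)&x);
    free(a);
    free(b);
    return 0;
}

/* Randomize the ABSOLUTE location of all subsequent stack frames by
 * consuming a random amount of stack before doing any real work. */
int main(void)
{
    srand((unsigned)(time(NULL) ^ (unsigned)getpid()));
    size_t shift = (size_t)(rand() % 4096) + 1;
    char pad[shift];           /* C99 VLA: random stack displacement */
    memset(pad, 0, shift);     /* touch it so it is not optimized away */
    return real_main() + (pad[0] & 0);
}
```

Running the program several times should print different heap distances and stack addresses even on a system without OS-level ASLR, which conveys the intuition behind the probabilistic protection the abstract claims: an exploit that hard-codes either an absolute address or a relative offset succeeds only by chance.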