Reducing the trusted computing base for applications on commodity systems

  • Authors:
  • Adrian Perrig; Michael K. Reiter; Jonathan M. McCune

  • Affiliations:
  • Carnegie Mellon University; Carnegie Mellon University; Carnegie Mellon University

  • Venue:
  • Ph.D. Thesis, Carnegie Mellon University
  • Year:
  • 2009

Abstract

Today we have powerful, feature-rich computer systems plagued by powerful, feature-rich malware. Current malware exploits vulnerabilities that are endemic to the huge computing base that must be trusted to secure our private information. This thesis presents an architecture called Flicker that relieves security-conscious developers of the burden of making sense of this code base, allowing them to concentrate on the security of their own code. Since today's legacy operating systems will likely remain in use for the foreseeable future, we design Flicker to coexist with them. Flicker allows code to execute in complete isolation from all other software while trusting as few as 250 lines of additional code, orders of magnitude fewer than even minimalist virtual machine monitors. Flicker also enables more meaningful attestation of the executed code and its inputs and outputs than previous proposals, since only measurements of the security-sensitive portions of an application need to be included. Flicker leverages hardware support in commodity processors from AMD and Intel that are widely available today, and it requires neither a new OS nor a VMM. Flicker's properties hold even if the BIOS, OS, and DMA-enabled devices are all malicious.

We evaluate a full implementation of Flicker on an AMD system and apply Flicker to four server-side applications. We also perform a detailed case study of using Flicker to reduce the trusted computing base to which users' input events are exposed on their own computers, thereby defeating entire classes of malware such as keyloggers and screen scrapers. This case study involves the development of a system called Bumpy, which allows the user to mark strings of input as sensitive as she enters them and ensures that these inputs reach the desired endpoint in a protected state. The inputs are processed in a Flicker-isolated code module on the user's system, where they can be encrypted or otherwise processed for a remote webserver. A trusted mobile device provides feedback to the user that her inputs are bound for the intended destination. We describe the design, implementation, and evaluation of Bumpy, with emphasis on both usability and security.

Flicker depends on attestations composed of cryptographic hashes and digital signatures to allow a remote verifier to ascertain the identity of the code that executes with Flicker's protections. We propose a mechanism called Seeing-is-Believing that allows the computer's owner to authenticate the physical identity of her computer in addition to the digital identity represented in the attestation. This rules out man-in-the-middle and proxy attacks and reduces the need for trusted third parties, which are unavailable today. Attestation technologies potentially pose a risk to users' privacy; Flicker protects users' privacy by including in an attestation only the code executed during a Flicker session, rather than information about all software loaded for execution during the current boot cycle.

Motivated by our experience with Flicker on today's hardware, we offer suggestions that leverage existing processor technology to improve Flicker's performance while retaining its security.
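The attestations described above rest on the TPM's hash-chaining ("extend") operation: a platform configuration register (PCR) is updated as PCR_new = SHA-1(PCR_old || measurement), so a remote verifier can recompute the PCR value that a correct Flicker session would produce from the known hash of the security-sensitive code and its inputs and outputs. The sketch below is a minimal, simplified illustration of that verifier-side check in Python; the function and parameter names (verify_flicker_attestation, pal_binary, and so on) and the exact measurement order are hypothetical, and real TPM quotes sign a structured blob rather than a raw PCR value.

```python
import hashlib

PCR_LEN = 20  # TPM v1.2 PCRs hold 160-bit SHA-1 values


def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM extend semantics: new PCR = SHA-1(old PCR || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()


def expected_pcr(pal_binary: bytes, inputs: bytes, outputs: bytes) -> bytes:
    """Recompute the PCR a correct Flicker session would produce.

    Hypothetical measurement order for illustration: the dynamic-launch
    PCR starts at a known reset value, then the isolated code module,
    its inputs, and its outputs are each hashed and extended in turn.
    """
    pcr = b"\x00" * PCR_LEN  # PCR reset value at dynamic launch (simplified)
    for blob in (pal_binary, inputs, outputs):
        pcr = extend(pcr, hashlib.sha1(blob).digest())
    return pcr


def verify_flicker_attestation(quoted_pcr: bytes, pal_binary: bytes,
                               inputs: bytes, outputs: bytes) -> bool:
    """Check the quoted PCR against the expected measurement chain.

    This sketch covers only the hash chain; signature checking on the
    quote itself is omitted.
    """
    return quoted_pcr == expected_pcr(pal_binary, inputs, outputs)
```

A real verifier would also check the TPM's digital signature over the quote against a public key it trusts; Seeing-is-Believing addresses how the owner comes to trust that the key belongs to her particular physical machine. Note how this structure yields the privacy property claimed above: only the isolated module and its inputs and outputs enter the chain, not every program loaded since boot.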