Code compression

  • Authors:
  • Jens Ernst, William Evans, Christopher W. Fraser, Todd A. Proebsting, and Steven Lucco

  • Affiliations:
  • University of Arizona, Dept. of Computer Science, Tucson, AZ (Ernst, Evans, Proebsting); Microsoft Research, Redmond, WA (Fraser); Microsoft, Redmond, WA (Lucco)

  • Venue:
  • Proceedings of the ACM SIGPLAN 1997 conference on Programming language design and implementation
  • Year:
  • 1997

Abstract

Current research in compiler optimization counts mainly CPU time and perhaps the first cache level or two. This view has been important but is becoming myopic, at least from a system-wide viewpoint, as the ratio of network and disk speeds to CPU speeds grows exponentially. For example, we have seen the CPU idle for most of the time during paging, so compressing pages can increase total performance even though the CPU must decompress or interpret the page contents. Another profile shows that many functions are called just once, so reduced paging could pay for their interpretation overhead.

This paper describes:

  • Measurements that show how code compression can save space and total time in some important real-world scenarios.
  • A compressed executable representation that is roughly the same size as gzipped x86 programs and can be interpreted without decompression. It can also be compiled to high-quality machine code at 2.5 megabytes per second on a 120MHz Pentium processor.
  • A compressed "wire" representation that must be decompressed before execution but is, for example, roughly 21% the size of SPARC code when compressing gcc.
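To make the paging tradeoff in the abstract concrete, the small C sketch below compares the time to fetch a raw page against the time to fetch a compressed page and then decompress it. This is only a back-of-the-envelope model; the page size, I/O bandwidth, decompression throughput, and compression ratio are assumed illustrative values, not figures reported by the paper.

    /* Illustrative break-even model (not from the paper): does fetching a
     * compressed page plus decompressing it beat fetching the raw page?
     * All parameters below are assumed values chosen for the example. */
    #include <stdio.h>

    int main(void) {
        double page_bytes         = 4096.0;  /* assumed page size              */
        double io_bytes_per_s     = 1.0e6;   /* assumed disk/network bandwidth */
        double decomp_bytes_per_s = 2.5e6;   /* assumed decompression speed    */
        double ratio              = 0.5;     /* assumed compressed/raw ratio   */

        double t_raw        = page_bytes / io_bytes_per_s;
        double t_compressed = (ratio * page_bytes) / io_bytes_per_s
                            + page_bytes / decomp_bytes_per_s;

        printf("raw fetch:        %.3f ms\n", 1e3 * t_raw);
        printf("compressed fetch: %.3f ms\n", 1e3 * t_compressed);
        printf("compression %s here\n",
               t_compressed < t_raw ? "wins" : "loses");
        return 0;
    }

With these assumed numbers the compressed fetch wins whenever the I/O time saved (here, half a page at 1 MB/s) exceeds the added decompression time, which is the same argument the abstract makes about an idle CPU during paging.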