Arithmetic coding for data compression. Communications of the ACM.
Efficient and language-independent mobile programs. PLDI '96: Proceedings of the ACM SIGPLAN 1996 conference on Programming language design and implementation.
Garbage collection: algorithms for automatic dynamic memory management.
Data compression for PC software distribution. Software—Practice & Experience.
Proceedings of the ACM SIGPLAN 1997 conference on Programming language design and implementation.
Communications of the ACM.
Computer architecture (2nd ed.): a quantitative approach.
The Java programming language (2nd ed.).
Automatic inference of models for statistical code compression. Proceedings of the ACM SIGPLAN 1999 conference on Programming language design and implementation.
Next century challenges: mobile networking for “Smart Dust”. MobiCom '99: Proceedings of the 5th annual ACM/IEEE international conference on Mobile computing and networking.
ARM System Architecture.
On the Complexity of Finite Sequences. IEEE Transactions on Information Theory.
Compression of individual sequences via variable-rate coding. IEEE Transactions on Information Theory.
Bytecode compression via profiled grammar rewriting. Proceedings of the ACM SIGPLAN 2001 conference on Programming language design and implementation.
Profile-guided code compression. PLDI '02: Proceedings of the ACM SIGPLAN 2002 conference on Programming language design and implementation.
Scalable Certification for Typed Assembly Language. TIC '00: Selected papers from the Third International Workshop on Types in Compilation.
Code optimization for code compression. Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization.
Generation of fast interpreters for Huffman compressed bytecode. Proceedings of the 2003 workshop on Interpreters, virtual machines and emulators.
Cold code decompression at runtime. Communications of the ACM - Program compaction.
PPMexe: PPM for Compressing Software. DCC '02: Proceedings of the Data Compression Conference.
Compressing XML with Multiplexed Hierarchical PPM Models. DCC '01: Proceedings of the Data Compression Conference.
Reducing program image size by extracting frozen code and data. Proceedings of the 4th ACM international conference on Embedded software.
ACM Transactions on Architecture and Code Optimization (TACO).
An instruction for direct interpretation of LZ77-compressed programs. Software—Practice & Experience.
A software-only compression system for trading-offs between performance and code size. SCOPES '05: Proceedings of the 2005 workshop on Software and compilers for embedded systems.
Adaptive object code compression. CASES '06: Proceedings of the 2006 international conference on Compilers, architecture and synthesis for embedded systems.
ACM Transactions on Programming Languages and Systems (TOPLAS).
Generation of fast interpreters for Huffman compressed bytecode. Science of Computer Programming - Special issue on advances in interpreters, virtual machines and emulators (IVME'03).
Curl: a language for web content. International Journal of Web Engineering and Technology.
Access pattern-based code compression for memory-constrained systems. ACM Transactions on Design Automation of Electronic Systems (TODAES).
JSZap: compressing JavaScript code. WebApps'10: Proceedings of the 2010 USENIX conference on Web application development.
This paper describes split-stream dictionary (SSD) compression, a new technique for transforming programs into a compact, interpretable form. We define a compressed program as interpretable when it can be decompressed at basic-block granularity with reasonable efficiency. The granularity requirement enables interpreters or just-in-time (JIT) translators to decompress basic blocks incrementally during program execution. Our previous approach to interpretable compression, the Byte-coded RISC (BRISC) program format [1], achieved unprecedented decompression speed in excess of 5 megabytes per second on a 450 MHz Pentium II while compressing benchmark programs to an average of three-fifths the size of their optimized x86 representation. SSD compression combines the key idea behind BRISC with new observations about instruction re-use frequencies to yield four advantages over BRISC and other competing techniques. First, SSD is simple, requiring only a few pages of code for an effective implementation. Second, SSD compresses programs more effectively than any interpretable program compression scheme known to us. For example, SSD compressed a set of programs including the spec95 benchmarks and Microsoft Word97 to less than half the size, on average, of their optimized x86 representation. Third, SSD exceeds BRISC's decompression and JIT translation rates by over 50%. Finally, SSD's two-phase approach to JIT translation enables a virtual machine to provide graceful degradation of program execution time in the face of increasing RAM constraints. For example, using SSD, we ran Word97 using a JIT-translation buffer one-third the size of Word97's optimized x86 code, yet incurred only 27% execution time overhead.
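The abstract does not spell out the encoding itself, but the core split-stream idea — separating instruction fields into per-field streams and coding each against its own dictionary of frequent values, so that a single basic block can be decompressed independently — can be illustrated with a toy sketch. Everything below (the `(opcode, operand)` tuple representation, the dictionary size, the literal-escape scheme, and all function names) is a hypothetical illustration, not the paper's actual format:

```python
from collections import Counter

# Toy sketch of the split-stream dictionary idea (assumptions, not the
# paper's format): instructions are (opcode, operand) tuples, each field
# stream is coded against its own small dictionary of frequent values,
# and values outside the dictionary use a ('lit', value) escape.

def build_dictionary(stream, size=4):
    """The up-to-`size` most frequent values in a field stream."""
    return [v for v, _ in Counter(stream).most_common(size)]

def encode_stream(stream, dictionary):
    """Replace each value by its dictionary index, or a literal escape."""
    index = {v: i for i, v in enumerate(dictionary)}
    return [index[v] if v in index else ('lit', v) for v in stream]

def decode_stream(codes, dictionary):
    """Inverse of encode_stream."""
    return [c[1] if isinstance(c, tuple) else dictionary[c] for c in codes]

def ssd_compress(block):
    """Split a basic block into opcode/operand streams; code each one."""
    opcodes = [op for op, _ in block]
    operands = [arg for _, arg in block]
    d_op, d_arg = build_dictionary(opcodes), build_dictionary(operands)
    return (encode_stream(opcodes, d_op), d_op,
            encode_stream(operands, d_arg), d_arg)

def ssd_decompress(c_op, d_op, c_arg, d_arg):
    """Rejoin the field streams into instructions, one basic block at a time."""
    return list(zip(decode_stream(c_op, d_op), decode_stream(c_arg, d_arg)))

block = [('load', 'r1'), ('add', 'r1'), ('load', 'r2'), ('add', 'r1')]
packed = ssd_compress(block)
assert ssd_decompress(*packed) == block
```

Splitting the fields pays off because each stream is far more self-similar than the interleaved instruction bytes, so small per-stream dictionaries capture most values; and because each block carries everything needed to decode it, a JIT translator can expand blocks on demand, which is the interpretability property the abstract requires.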