Generation of fast interpreters for Huffman compressed bytecode

  • Authors:
  • Mario Latendresse; Marc Feeley

  • Affiliations:
  • Science and Technology Advancement Team, FNMOC/U.S. Navy, Monterey, CA; Département d'informatique et recherche opérationnelle, Université de Montréal, Montréal, Canada

  • Venue:
  • Science of Computer Programming - Special issue on advances in interpreters, virtual machines and emulators (IVME'03)
  • Year:
  • 2005

Abstract

Embedded systems often have severe memory constraints that require careful encoding of programs. For example, smart cards have on the order of 1K of RAM, 16K of non-volatile memory, and 24K of ROM. A virtual machine can be an effective approach to obtaining compact programs, but instructions are commonly encoded using one byte for the opcode and multiple bytes for the operands, which can be wasteful and thus limit the size of programs runnable on embedded systems. Our approach uses canonical Huffman codes to generate compact opcodes with custom-sized operand fields, together with a virtual machine that directly executes this compact code. We present techniques to automatically generate the new instruction formats and the decoder. In effect, this automatically creates both an instruction set for a customized virtual machine and an implementation of that machine. We demonstrate that fast decoding of these compressed virtual instructions is feasible without prior decompression. Through experiments on Scheme and Java, we demonstrate the speed of these decoders. Java benchmarks show an average execution slowdown of 9%. The size reductions depend heavily on the original bytecode and the training samples, but typically range from 40% to 60%.
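The paper generates its decoder automatically from the derived instruction formats; as a minimal illustration of the underlying canonical-Huffman idea (all names below are our own, not from the paper), the following sketch assigns canonical codes to opcodes from given code lengths and decodes them one bit at a time using per-length first-code tables:

```python
from itertools import count

def assign_canonical_codes(lengths):
    """Assign canonical Huffman codes given {symbol: code_length}:
    symbols sorted by (length, symbol) receive consecutive code values,
    left-justified when the length increases."""
    code, prev_len, codes = 0, 0, {}
    for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= length - prev_len      # extend the code to the new length
        codes[sym] = (code, length)
        code += 1
        prev_len = length
    return codes

def build_tables(codes):
    """Per-length (first_code, symbols) tables used by the decoder."""
    by_len = {}
    for sym, (code, length) in codes.items():
        by_len.setdefault(length, []).append((code, sym))
    return {L: (min(c for c, _ in v), [s for _, s in sorted(v)])
            for L, v in by_len.items()}

def decode_one(bits, pos, tables):
    """Read bits one at a time; a canonical code of length L is valid
    iff first_code[L] <= value < first_code[L] + count[L]."""
    value = 0
    for length in count(1):
        value = (value << 1) | bits[pos]
        pos += 1
        entry = tables.get(length)
        if entry:
            first, symbols = entry
            idx = value - first
            if 0 <= idx < len(symbols):
                return symbols[idx], pos

# Example: four opcodes with code lengths 1, 2, 3, 3 give the canonical
# codes 0, 10, 110, 111; a stream of those codes decodes back correctly.
codes = assign_canonical_codes({'push': 1, 'add': 2, 'load': 3, 'ret': 3})
tables = build_tables(codes)
```

In the paper's setting the decoder is specialized at virtual-machine generation time, so the per-length tables become hard-coded comparisons rather than dictionary lookups; this sketch keeps them as data for clarity.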