Static and Dynamic Program Compilation by Interpreter Specialization

  • Authors:
  • Scott Thibault, Charles Consel, Julia L. Lawall, Renaud Marlet, and Gilles Muller

  • Affiliations:
  • Scott Thibault, Charles Consel, Renaud Marlet, Gilles Muller: COMPOSE group, IRISA/INRIA, Campus de Beaulieu, 35042 Rennes Cedex, France (thibault@gmvhdl.com, consel@irisa.fr, marlet@irisa.fr, muller@irisa.fr)
  • Julia L. Lawall: Computer Science Department, Boston University, 111 Cummington St., Boston, MA 02215, USA (jll@cs.bu.edu)

  • Venue:
  • Higher-Order and Symbolic Computation
  • Year:
  • 2000

Abstract

Interpretation and run-time compilation techniques are increasingly important because they can support heterogeneous architectures, evolving programming languages, and dynamically-loaded code. Interpretation is simple to implement, but yields poor performance. Run-time compilation yields better performance, but is costly to implement. One way to preserve simplicity but obtain good performance is to apply program specialization to an interpreter in order to generate an efficient implementation of the program automatically. Such specialization can be carried out at both compile time and run time.

Recent advances in program-specialization technology have significantly improved the performance of specialized interpreters. This paper presents and assesses experiments applying program specialization to both bytecode and structured-language interpreters. The results show that for some general-purpose bytecode languages, specialization of an interpreter can yield speedups of up to a factor of four, while specializing certain structured-language interpreters can yield performance comparable to that of an implementation in a general-purpose language, compiled using an optimizing compiler.
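To make the core idea concrete, here is a minimal sketch (not the paper's system, and far simpler than the specializers it evaluates) of specializing a tiny bytecode interpreter with respect to a known program. The opcode names, the `interpret` and `specialize` functions, and the example program are all hypothetical; the point is only that dispatch on opcodes is performed once, at specialization time, leaving straight-line residual code:

```python
def interpret(program, x):
    """Generic interpreter: dispatches on each opcode at run time."""
    acc = x
    for op, arg in program:
        if op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
        else:
            raise ValueError(f"unknown opcode: {op}")
    return acc

def specialize(program):
    """Partially evaluate the interpreter for a fixed program:
    the opcode dispatch happens here, once, and the residual
    function contains only the arithmetic."""
    lines = ["def residual(x):", "    acc = x"]
    for op, arg in program:
        if op == "ADD":
            lines.append(f"    acc += {arg}")
        elif op == "MUL":
            lines.append(f"    acc *= {arg}")
        else:
            raise ValueError(f"unknown opcode: {op}")
    lines.append("    return acc")
    env = {}
    exec("\n".join(lines), env)  # compile the residual program
    return env["residual"]

prog = [("ADD", 3), ("MUL", 2)]   # computes (x + 3) * 2
compiled = specialize(prog)
assert interpret(prog, 5) == compiled(5) == 16
```

The residual function is equivalent to the interpreter applied to `prog`, but contains no interpretation overhead; this is the (first Futamura projection) effect that the paper's specializers achieve automatically, at compile time or run time.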