Compiler optimizations using data compression to decrease address reference entropy

  • Authors:
  • H. G. Dietz; T. I. Mattox

  • Affiliations:
  • Electrical and Computer Engineering Department, University of Kentucky, Lexington, KY; Electrical and Computer Engineering Department, University of Kentucky, Lexington, KY

  • Venue:
  • LCPC'02: Proceedings of the 15th International Conference on Languages and Compilers for Parallel Computing
  • Year:
  • 2002

Abstract

In modern computers, a single “random” access to main memory often takes as much time as executing hundreds of instructions. Rather than using traditional compiler approaches to enhance locality by interchanging loops, reordering data structures, etc., this paper proposes the radical concept of using aggressive data compression technology to improve hierarchical memory performance by reducing memory address reference entropy. In some cases, conventional compression technology can be adapted. However, where variable access patterns must be permitted, other compression techniques must be used. For the special case of random access to elements of sparse matrices, data structures and compiler technology already exist. Our approach is much more general, using compressive hash functions to implement random access lookup tables. Techniques that can be used to improve the effectiveness of any compression method in reducing memory access entropy also are discussed.
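
Illustrative sketch (not from the paper): one way to picture a "compressive" random-access lookup table is to replace a logically huge, sparsely populated array with a small open-addressed hash table, so that every run-time access lands in a compact, cache-resident region and the entropy of the addresses actually referenced drops. The C sketch below uses invented names (ctab_t, ctab_put, ctab_get, CTAB_SLOTS) and a fixed table size; it is a simplification of the general idea, not the compiler-generated code or the hash construction described by the authors.

    /*
     * Sketch of a compressive random-access lookup table.
     * A logical array table[0 .. 2^30-1] with few populated entries is
     * backed by a small hash table, so all accesses touch only a few KB.
     * All identifiers here are hypothetical, chosen for this example.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define CTAB_SLOTS 1024          /* small, cache-resident backing store */
    #define CTAB_EMPTY UINT32_MAX    /* sentinel: this index value is reserved */

    typedef struct {
        uint32_t key[CTAB_SLOTS];    /* original (sparse) logical index */
        double   val[CTAB_SLOTS];    /* stored element */
    } ctab_t;

    static void ctab_init(ctab_t *t) {
        for (int i = 0; i < CTAB_SLOTS; i++) t->key[i] = CTAB_EMPTY;
    }

    /* Cheap multiplicative hash; any well-mixing hash would do. */
    static uint32_t ctab_hash(uint32_t k) {
        return (k * 2654435761u) & (CTAB_SLOTS - 1);
    }

    /* Insert or overwrite logical table[k] = v, using linear probing.
     * (A real implementation would also handle a full table.) */
    static void ctab_put(ctab_t *t, uint32_t k, double v) {
        uint32_t i = ctab_hash(k);
        while (t->key[i] != CTAB_EMPTY && t->key[i] != k)
            i = (i + 1) & (CTAB_SLOTS - 1);
        t->key[i] = k;
        t->val[i] = v;
    }

    /* Random-access read of logical table[k]; unset entries read as 0.0. */
    static double ctab_get(const ctab_t *t, uint32_t k) {
        uint32_t i = ctab_hash(k);
        while (t->key[i] != CTAB_EMPTY) {
            if (t->key[i] == k) return t->val[i];
            i = (i + 1) & (CTAB_SLOTS - 1);
        }
        return 0.0;
    }

    int main(void) {
        static ctab_t t;
        ctab_init(&t);
        /* Logical indices spread across a huge index space ...          */
        ctab_put(&t, 7u, 3.5);
        ctab_put(&t, 900000001u, -1.25);
        /* ... yet every access lands in the same ~12 KB of memory.      */
        printf("%g %g %g\n", ctab_get(&t, 7u),
               ctab_get(&t, 900000001u), ctab_get(&t, 42u));
        return 0;
    }

The power-of-two slot count keeps each probe step a cheap bit-mask; a production version would size and fill the table from compile-time or profile information about which indices can actually occur.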