Allocating architected registers through differential encoding

  • Authors:
  • Xiaotong Zhuang; Santosh Pande

  • Affiliations:
  • Georgia Institute of Technology, Atlanta, GA; Georgia Institute of Technology, Atlanta, GA

  • Venue:
  • ACM Transactions on Programming Languages and Systems (TOPLAS)
  • Year:
  • 2007


Abstract

Micro-architecture designers are very cautious about expanding the number of architected and exposed registers in the instruction set because increasing the register field adds to the code size, raises the I-cache and memory pressure, and may complicate the processor pipeline. Especially for low-end processors, encoding space can be extremely limited due to area and power considerations. On the other hand, the number of architected registers exposed to the compiler directly affects the effectiveness of compiler analysis and optimization. For high-performance computers, register pressure in some regions can exceed the number of available registers, often as a result of optimizations such as aggressive function inlining and software pipelining. The compiler cannot perform compilation and optimization effectively if only a small number of registers are exposed through the ISA. Therefore, it is crucial that more architected registers be placed at the compiler's disposal without expanding the code size significantly. In this article, we devise a new register encoding scheme, called differential encoding, that allows more registers to be addressed in the operand field of instructions than the direct encoding currently in use. We show that this can be implemented with very low overhead. We then apply differential encoding in several ways so that the extra architected registers benefit performance. Three schemes are devised to integrate differential encoding with register allocation. We demonstrate that differential register allocation helps improve the performance of both high-end and low-end processors. Moreover, we can combine it with software pipelining to provide more registers and reduce spills. Our results show that differential encoding significantly reduces the number of spills and speeds up program execution. For a low-end configuration, we achieve over 14% speedup while keeping code size almost unaffected. For a high-end VLIW in-order machine, it can significantly speed up loops with high register pressure (about 80% speedup), and the overall speedup is about 15%. Moreover, our scheme can be applied in an adaptive manner, making its overhead much smaller.
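The abstract does not spell out the exact decoding rule, so the following is only a minimal illustrative sketch of one plausible form of differential register addressing: the operand field is read as a signed offset from the register index of the previous reference, which lets a fixed-width field reach a register file larger than direct encoding could address. All names and parameters here (FIELD_BITS, ARCH_REGS, the initial anchor register) are assumptions for illustration, not the paper's actual scheme.

```c
/*
 * Illustrative sketch only (not the paper's scheme): a 3-bit operand
 * field is decoded as a signed offset from the register used by the
 * previous reference, so the same field width can reach a 16-entry
 * register file that 3-bit direct encoding cannot address.
 */
#include <stdio.h>

#define ARCH_REGS   16   /* architected registers we want to reach   */
#define FIELD_BITS   3   /* width of the operand field (8 encodings) */

/* Decode one differential operand: 'prev' is the register index of the
 * previous reference; 'field' is the raw FIELD_BITS-wide encoding,
 * interpreted as a signed offset in [-4, +3], wrapped modulo ARCH_REGS. */
static int decode_diff(int prev, unsigned field)
{
    int delta = (int)field;
    if (delta >= (1 << (FIELD_BITS - 1)))   /* sign-extend the field */
        delta -= (1 << FIELD_BITS);
    return ((prev + delta) % ARCH_REGS + ARCH_REGS) % ARCH_REGS;
}

int main(void)
{
    /* A short operand stream: each entry is a raw 3-bit field value. */
    unsigned stream[] = { 1, 7, 2, 6, 3 };
    int reg = 0;                             /* assumed initial anchor */

    for (size_t i = 0; i < sizeof stream / sizeof stream[0]; i++) {
        reg = decode_diff(reg, stream[i]);
        printf("field %u -> r%d\n", stream[i], reg);
    }
    return 0;
}
```

In such a scheme the compiler's register allocator would try to keep successive references close together in register-index space, since targets outside the offset range would require extra encoding or adjustment; this is the kind of constraint the paper's integration of differential encoding with register allocation would need to manage.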