Data buffering: run-time versus compile-time support

  • Authors:
  • H. Mulder

  • Affiliations:
  • Section Computer Architecture and Digital Systems, Department of Electrical Engineering, Delft University of Technology, PO Box 5031, 2600 AG Delft, The Netherlands

  • Venue:
  • ASPLOS III: Proceedings of the Third International Conference on Architectural Support for Programming Languages and Operating Systems
  • Year:
  • 1989

Abstract

Data-dependency, branch, and memory-access penalties are the main constraints on the performance of high-speed microprocessors. Memory-access penalties comprise both penalties imposed by external memory (e.g. cache) and those caused by underutilization of the local processor memory (e.g. registers). This paper focuses solely on methods of increasing the utilization of data memory local to the processor (registers or register-oriented buffers).

A utilization increase of local processor memory is possible by means of compile-time software, run-time hardware, or a combination of both. This paper examines data buffers that perform solely by virtue of compile-time software (single register sets); those that operate mainly through hardware but with possible software assistance (multiple register sets); and those intended to operate transparently with main memory, implying no software assistance whatsoever (stack buffers).

The paper shows that hardware buffering schemes cannot replace compile-time effort, but can at most reduce the complexity of that effort. It shows the utilization increase obtained by applying register allocation to multiple register sets. The paper also shows a potential utilization decrease inherent to stack buffers. The observation that a single register set, allocated by means of interprocedural allocation, performs competitively with both multiple register sets and stack buffers underlines the significance of this conclusion.
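
The following minimal C sketch (not taken from the paper; function names and values are illustrative only) shows the kind of call-boundary register traffic the compared schemes address: with a single register set and purely intraprocedural allocation, a caller value live across a call is typically spilled to memory, whereas interprocedural allocation or hardware multiple register sets can keep it resident.

```c
#include <stdio.h>

/* Hypothetical example: `scale` is a callee whose locals compete with the
 * caller's live values for registers.  With intraprocedural allocation only,
 * the compiler must assume the call may clobber the caller's registers and
 * spill `sum` around it; interprocedural allocation can place the two
 * procedures' values in disjoint registers, and multiple register sets
 * (register windows) achieve a similar effect in hardware by giving each
 * procedure its own register set. */
static int scale(int x)
{
    int k = 3;              /* callee local value */
    return k * x;
}

int main(void)
{
    int sum = 0;            /* caller value live across every call to scale() */
    for (int i = 0; i < 10; i++)
        sum += scale(i);    /* without interprocedural allocation: spill/reload of sum */
    printf("%d\n", sum);    /* prints 135 */
    return 0;
}
```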