Transactional prefetching: narrowing the window of contention in hardware transactional memory

  • Authors and affiliations:
  • Anurag Negi (Chalmers University of Technology, Gothenburg, Sweden)
  • Adrià Armejach (Barcelona Supercomputing Center, Universitat Politècnica de Catalunya, Barcelona, Spain)
  • Adrián Cristal (Barcelona Supercomputing Center, IIIA - Artificial Intelligence Research Institute, Barcelona, Spain)
  • Osman S. Unsal (Barcelona Supercomputing Center, Barcelona, Spain)
  • Per Stenstrom (Chalmers University of Technology, Gothenburg, Sweden)

  • Venue:
  • Proceedings of the 21st international conference on Parallel architectures and compilation techniques
  • Year:
  • 2012

Abstract

Memory access latency is the primary performance bottleneck in modern computer systems. Prefetching data before a processing core needs it allows substantial performance gains by overlapping significant portions of memory latency with useful work. Prior work has investigated this technique and measured its potential benefits in a variety of scenarios, but its use in speeding up Hardware Transactional Memory (HTM) has remained hitherto unexplored. In several HTM designs, transactions invalidate speculatively updated cache lines when they abort. Such cache lines tend to have high locality and are likely to be accessed again when the transaction re-executes. Coarse-grained transactions that update several cache lines are particularly susceptible to performance degradation even under moderate contention, yet they show strong locality of reference, especially when contention is high, and they are likely to form a common TM use case. Prefetching cache lines with high locality can therefore improve overall concurrency by speeding up transactions and, thereby, narrowing the window of time in which such transactions persist and can cause contention. We note that traditional prefetch techniques may not be able to track such lines adequately or issue prefetches quickly enough. This paper investigates the use of prefetching in HTMs, proposing a simple design to identify and request prefetch candidates, and quantifies the performance gains for several representative TM workloads.
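To make the abort-then-prefetch pattern concrete, the following is a minimal software sketch using Intel RTM intrinsics and the GCC/Clang __builtin_prefetch builtin (compile with -mrtm). It is not the paper's mechanism: the paper proposes a hardware design that identifies and requests prefetch candidates automatically, whereas this sketch re-prefetches, in software, the lines an aborted transaction was updating before retrying. All names here (transactional_add, fallback_path, MAX_RETRIES) are illustrative assumptions, not from the paper.

```c
/* Sketch: after an abort, the speculatively written cache lines were
 * invalidated, but the retry will touch them again (high locality),
 * so pull them back in with write intent before re-executing.  This
 * narrows the window in which the transaction runs and can conflict. */
#include <immintrin.h>   /* _xbegin, _xend, _XBEGIN_STARTED (needs -mrtm) */
#include <stddef.h>

#define MAX_RETRIES 8    /* illustrative retry budget */

static void fallback_path(long *counters, size_t n, long delta)
{
    /* Non-speculative fallback; a real implementation would guard this
     * with a lock that running transactions also read (lock elision). */
    for (size_t i = 0; i < n; ++i)
        counters[i] += delta;
}

void transactional_add(long *counters, size_t n, long delta)
{
    for (int attempt = 0; attempt < MAX_RETRIES; ++attempt) {
        if (_xbegin() == _XBEGIN_STARTED) {
            for (size_t i = 0; i < n; ++i)
                counters[i] += delta;    /* speculative updates */
            _xend();
            return;                      /* committed */
        }
        /* Aborted: prefetch the write set with write intent so the
         * re-execution does not stall on the invalidated lines. */
        for (size_t i = 0; i < n; ++i)
            __builtin_prefetch(&counters[i], 1 /* write */, 3 /* keep */);
    }
    fallback_path(counters, n, delta);   /* give up on HTM */
}
```

Overlapping the misses with the retry's setup, rather than paying them inside the re-executing transaction, is the latency-hiding effect the abstract describes; in the paper this happens in hardware, with no source changes and no explicit knowledge of the write set in software.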