Language constructs for transactional memory

  • Authors: Tim Harris
  • Affiliations: Microsoft Research, Cambridge, United Kingdom
  • Venue: Proceedings of the 36th annual ACM SIGPLAN-SIGACT symposium on Principles of programming languages (POPL '09)
  • Year: 2009

Abstract

Building concurrent shared-memory data structures is a notoriously difficult problem, and so the widespread move to multi-core and multi-processor hardware has led to increasing interest in language constructs that may make concurrent programming easier. One technique that has been studied widely is the use of atomic blocks built over transactional memory (TM): the programmer marks a section of code as atomic, and the language implementation speculatively executes it using transactions. Transactions can run in parallel so long as they access different data. In this talk I'll introduce some of the challenges that I've seen in building robust implementations of this idea. What are the language design choices that exist? What language features can be used inside atomic blocks, and where can atomic blocks occur? Which uses of atomic blocks should be considered correct, and which uses should be considered "racy"? What are the likely impacts of different design choices on performance? What are the impacts on flexibility for the language implementer, and what are the impacts on flexibility to the programmer using these constructs? I'll argue that one way of trying to resolve these questions is to be rigorous about keeping the ideas of atomic blocks and TM separate; in practice they've often been conflated (not least in languages that I've worked on). I'll argue that, when thinking about atomic blocks, we should keep a wide range of possible implementations in mind (for example, TM, lock inference, or simply control over pre-emption). Similarly, when thinking about TM, we should recognize that it can be exposed to programmers through a wide range of abstractions and language constructs.
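To make the atomic-block idea concrete, the following is a minimal sketch using Haskell's Control.Concurrent.STM library, one existing TM-based realization of these constructs. The bank-transfer example and its names are illustrative choices, not taken from the talk; the point is simply that the reads and writes inside atomically commit as a single transaction, and that transfers touching different accounts can run in parallel.

    -- A minimal sketch of an atomic block realized via software
    -- transactional memory (Haskell's stm library). The transfer
    -- example is illustrative only.
    import Control.Concurrent.STM

    type Account = TVar Int

    -- The body of 'atomically' executes speculatively as a transaction:
    -- its reads and writes to the two accounts commit as one atomic step.
    transfer :: Account -> Account -> Int -> IO ()
    transfer from to amount = atomically $ do
      balance <- readTVar from
      if balance < amount
        then retry                      -- block until 'from' changes
        else do
          writeTVar from (balance - amount)
          modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      transfer a b 40
      finalA <- readTVarIO a
      finalB <- readTVarIO b
      print (finalA, finalB)            -- prints (60,40)

Note that this is only one point in the design space the abstract raises: the same atomic-block surface syntax could instead be implemented by lock inference or by control over pre-emption, which is precisely the separation between atomic blocks and TM that the talk argues for.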