Efficient parallel programming in Poly/ML and Isabelle/ML

  • Authors:
  • David C.J. Matthews; Makarius Wenzel

  • Affiliations:
  • Prolingua Ltd, Edinburgh, Scotland, UK; Technische Universität München, Garching b. München, Germany

  • Venue:
  • Proceedings of the 5th ACM SIGPLAN Workshop on Declarative Aspects of Multicore Programming (DAMP '10)
  • Year:
  • 2010

Abstract

The ML family of languages and LCF-style interactive theorem proving have been closely related from their beginnings about 30 years ago. Here we report on a recent project to adapt both the Poly/ML compiler and the Isabelle theorem prover to current multicore hardware. Checking theories and proofs in typical Isabelle applications takes minutes or hours, and users expect to make efficient use of "home machines" with 2-8 cores, or more. Poly/ML and Isabelle are big and complex software systems that have evolved over more than two decades. Faced with the requirement to deliver a stable and efficient parallel programming environment, many infrastructure layers had to be reworked: from low-level system threads to high-level principles of value-oriented programming. At each stage we carefully selected from the many existing concepts for parallelism and integrated them in a way that fits smoothly into the idea of purely functional ML with the addition of synchronous exceptions and asynchronous interrupts. From the Isabelle/ML perspective, the main concept for managing parallel evaluation is that of "future values". Scheduling is implicit, but it is also possible to specify dependencies and priorities. In addition, block-structured groups of futures with propagation of exceptions allow for alternative functional evaluation (such as parallel search) without requiring user code to tackle concurrency. Our library also provides the usual parallel combinators for functions on lists, and analogous versions for prover tactics. Despite substantial reorganization in the background, only minimal changes are occasionally required in user ML code, and none at the Isabelle application level (where parallel theory and proof processing is fully implicit). The present implementation is able to address more than 8 cores effectively, while the earlier version shipped in the official Isabelle2009 release works best with 2-4 cores. Scalability beyond 16 cores still poses extra challenges and will require further improvements to the Poly/ML runtime system (heap management and garbage collection), as well as additional parallelization of Isabelle application logic.
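
As an informal illustration of the programming model summarized above, the following Standard ML sketch shows how user code might employ future values and a parallel list combinator in the style described. The interface names (Future.fork, Future.join, Par_List.map) follow Isabelle/ML conventions but are assumptions made for illustration, and the helper function expensive_check is hypothetical; this is a sketch, not an excerpt from the paper.

    (* Sketch of the future-value style, assuming an interface of roughly:
         Future.fork : (unit -> 'a) -> 'a future
         Future.join : 'a future -> 'a
         Par_List.map : ('a -> 'b) -> 'a list -> 'b list  *)

    (* Stand-in for some expensive, purely functional computation (hypothetical). *)
    fun expensive_check (n: int) = n * n;

    (* Fork the evaluation; scheduling onto worker threads is implicit. *)
    val fut = Future.fork (fn () => expensive_check 42);

    (* Other work may proceed here; join blocks only until the result is ready.
       An exception raised inside the future is re-raised at the join point. *)
    val result = Future.join fut;

    (* Parallel combinator over lists: independent elements are evaluated concurrently. *)
    val squares = Par_List.map expensive_check [1, 2, 3, 4];

As the abstract notes, grouped futures propagate exceptions in a block-structured way, so alternative evaluations such as parallel search can fail or be cancelled collectively while user code remains purely functional.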