Parallel thinking

  • Authors:
  • Guy E. Blelloch

  • Affiliations:
  • Carnegie Mellon University, Pittsburgh, PA, USA

  • Venue:
  • Proceedings of the 14th ACM SIGPLAN symposium on Principles and practice of parallel programming
  • Year:
  • 2009

Abstract

Assuming that the multicore revolution plays out the way the microprocessor industry expects, it seems that within a decade most programming will involve parallelism at some level. One needs to ask how this affects the way we teach computer science, or even how we have people think about computation. With regard to teaching there seem to be three basic choices: (1) we only train a small number of experts in parallel computation who develop a collection of libraries, and everyone else just uses them; (2) we leave our core curriculum pretty much as is, but add some advanced courses on parallelism or perhaps tack on a few lectures at the end of existing courses; or (3) we start teaching parallelism from the start and embed it throughout the curriculum with the idea of getting students to think about parallelism as the most natural form of computation and sequential computation as a special case. This talk will examine some of the implications of the third option. It will argue that thinking about parallelism, when treated in an appropriate way, might be as easy as, or easier than, thinking sequentially. A key prerequisite, however, is to identify what the core ideas in parallelism are and how they might be layered and integrated with existing concepts. Another more difficult issue is how to cleanly integrate these ideas among courses. After all, much of the success of sequential computation follows from the concept of a random access machine and its ability to serve as a simple, albeit imperfect, interface between programming languages, algorithm analysis, and hardware design. The talk will go through an initial list of some core ideas in parallelism, and an approach to integrating these ideas between parallel algorithms, programming languages, and, to some extent, hardware. This requires, however, moving away from the concept of a machine model as an interface for thinking about computation.
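As a minimal illustration of the abstract's central idea (not an example from the talk itself), consider summing a list by divide and conquer: the two halves are independent and can be evaluated in parallel, and running them one after the other is just the special, sequential schedule of the same algorithm. The sketch below expresses this in Python with `concurrent.futures`; the function name `psum` and the cutoff value are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def psum(xs, cutoff=4):
    """Divide-and-conquer sum.

    The two recursive calls are independent, so they may run in
    parallel; executing them sequentially is simply one possible
    schedule of the same computation.
    """
    if len(xs) <= cutoff:
        return sum(xs)  # small base case: fall back to sequential sum
    mid = len(xs) // 2
    with ThreadPoolExecutor(max_workers=2) as pool:
        left = pool.submit(psum, xs[:mid], cutoff)
        right = pool.submit(psum, xs[mid:], cutoff)
        return left.result() + right.result()

print(psum(list(range(100))))  # same result as sum(range(100))
```

Nothing in the algorithm itself mentions threads or ordering; the parallelism lives in the independence of the subproblems, which is the kind of reasoning the talk argues students could learn first.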