Current proposals for parallel C++

  • Authors: Howard L. Operowsky
  • Affiliation: IBM T. J. Watson Research Center
  • Venue: CASCON '95: Proceedings of the 1995 Conference of the Centre for Advanced Studies on Collaborative Research
  • Year: 1995


Abstract

As powerful microprocessors become cheaper, there is a growing trend toward distributed processing on groups of these processors, both as networks of workstations and as distributed-memory computers in which each processing node has its own local memory and there is no global memory. Parallel applications are constructed by starting a program on each node of the network or distributed-memory computer and having the individual programs communicate by sending messages to each other. Common message-passing libraries, such as MPI, PVM, and P4, are available to enhance portability across platforms.

There is an obvious analogy between sending messages between the nodes of these parallel applications and sending messages between the objects of an object-oriented application. But programming with message-passing libraries is cumbersome, and programs are easier to write when the programmer can use a familiar language. With this in mind, there has been much work on extending object-oriented languages to support parallelism, especially C++. However, the various proposals differ in many ways: some use language extensions and require compiler support, while others use class libraries; some support control parallelism, while others support data parallelism. In this presentation, we describe the features of several competing proposals for adding parallelism to C++ and code a simple example in each.