The heart of object-oriented concurrent programming

  • Authors:
  • J. Lim; R. E. Johnson

  • Affiliations:
  • Univ. of Illinois, Urbana (both authors)

  • Venue:
  • OOPSLA/ECOOP '88 Proceedings of the 1988 ACM SIGPLAN workshop on Object-based concurrent programming
  • Year:
  • 1988

Abstract

Concurrency has been with us almost from the beginning of computing. Managing and programming for concurrency is a difficult problem, and various solutions have been suggested over the years. Debates on message passing vs. remote procedure call, synchronous vs. asynchronous message passing, bounded vs. unbounded buffers, active vs. passive objects, etc. still continue. No solution is entirely satisfactory. Concurrent programming usually depends heavily on the nature of the problem at hand and the architecture of the target machine.

Object-oriented programming brings new hope for a better solution to concurrent programming. It offers the potential of hiding the details of concurrency in high-level abstractions and of using the modularity imposed by an object-oriented design to model the locality that is so important in distributed systems. However, much work in the area simply continues the debates mentioned above or investigates language design or extensions to existing languages. Although it is important to select a good set of primitives for concurrent programming, the exact set that is used will not fundamentally alter the nature of the concurrent programming problem. Continuing the above-mentioned debates and trying out different combinations is unlikely to result in a major improvement.

These debates have very little to do with object-oriented programming. Designing features for concurrency in OOP languages is not much different from doing so in other kinds of languages; concurrency is orthogonal to OOP at the lowest levels of abstraction. OOP or not, all the traditional problems in concurrent programming still remain. However, at the highest levels of abstraction, OOP can alleviate the concurrency problem for the majority of programmers by hiding the concurrency inside reusable abstractions.

The real issue in object-oriented concurrent programming is not how to introduce concurrency to OOP but what OOP brings to concurrent programming.
The essence of OOP is reusable abstractions. For example, the main contribution of OOP to user interface design is that it allows a programmer to ignore I/O details and to focus on the high-level design of the user interface. In the same way, application programmers will not have to be experts in concurrent programming if they can reuse implementations of abstractions that encapsulate low-level details like partitioning, communication, and synchronization.

Most programmers do not want to write concurrent programs; they just want to write programs that work and that are fast. Concurrency is necessary both in real-time programming and in programming parallel computers. However, most programmers would prefer to program at a high enough level that they avoid having to be concerned with the low-level details of concurrency. Obviously, someone must discover these high-level abstractions and implement them. But the majority of programmers will be free from the low-level details of concurrent programming and can concentrate on the high-level issues of designing algorithms.

Work on data-parallel programming has resulted in a number of useful abstractions. The Connection Machine has spawned a number of them [1]. The paralation model [2] represents a collection of objects as a single entity; a programmer can specify the locality among constituent objects. Paralations make good building blocks for the data-parallel style of programming. The Concurrent Data Structures of William Dally [3] also address data parallelism, and his work is one of the few attempts we have seen at designing abstractions that use other kinds of parallelism as well. However, most people working in the area of object-oriented concurrent programming rarely describe high-level abstractions.

The goal of our research is to discover a set of reusable abstractions that encapsulate the details of concurrency.
For this scheme to be effective, the users of abstractions must not have to worry about synchronization among concurrent processes. The architecture of the target machine should be as transparent as possible. Abstractions should be reusable in many applications.

It may not be possible to fully realize all our goals. Some data abstractions are much easier to implement on some architectures than on others, so the target architecture will probably always have an influence on some applications. We may not discover a complete set of abstractions that covers every application. However, no software library is sufficient for every task; a library is enough if it helps solve a large fraction of our problems. Even if different target architectures require different high-level abstractions to achieve the greatest performance, we will consider a library to be successful if the complexity of concurrent programming is confined to a few well-defined places so that it can be easily managed and adapted to changing environments.

We are working with Smalltalk-80 to find reusable abstractions in concurrent programming. Smalltalk-80 provides an ideal environment for discovering abstractions, for organizing them, and for experimenting with them. One natural place to look for concurrency is the collection of data objects. Smalltalk-80 includes two such classes, Collection and Stream. Data parallelism is provided by subclasses of Collection with concurrent implementations of the 'enumerating' protocol. A ParallelCollection presents a collection of objects like FORTRAN-8X arrays and APL matrices; a programmer can view a ParallelCollection as a single entity and express concurrent operations on it without worrying about interactions among its elements. Data-flow parallelism is provided by subclasses of Stream with concurrent implementations.

Various divide-and-conquer algorithms can be abstracted as a WorkPool. Multiple processes take subproblems out of a WorkPool and work on them.
Each process may generate more subproblems and put them back in the WorkPool. WorkPools are useful for graph-search algorithms such as game playing and the traveling salesman problem. We also used one to implement a parallel dataflow analysis algorithm.

The abstractions that we discovered are very similar to those used by Dally, although we did not learn about his work until recently. Our favorite explanation is that these abstractions are probably fundamental to parallel programming and would be discovered by anyone who approached the problem with the right attitude. A more likely explanation is the common influence of Smalltalk. We plan to look into other problem domains to find more reusable abstractions.

In conclusion, the most important aspect of object-oriented programming is its ability to support reusable abstractions. Although most papers on the subject of object-oriented concurrent programming emphasize language models, there are probably a number of reusable abstractions that have been discovered by application programmers and left unpublicized. Those abstractions should be publicized, discussed, and refined. Object-oriented programming can make its biggest impact on concurrent systems by generating high-level abstractions that application programmers can use to build parallel systems more easily. We should concentrate more of our effort on finding these abstractions.
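The ParallelCollection idea described above, a collection viewed as a single entity whose enumerating operations run elementwise in parallel, can be sketched in modern terms. The paper gives no code; the class and method names below are illustrative, and Python stands in for Smalltalk-80:

```python
# Sketch of a ParallelCollection (hypothetical names, not from the paper):
# collect() maps a function over all elements concurrently, so the caller
# never sees threads or synchronization; the abstraction encapsulates them.
from concurrent.futures import ThreadPoolExecutor

class ParallelCollection:
    """A collection treated as a single entity with parallel enumeration."""

    def __init__(self, elements):
        self._elements = list(elements)

    def collect(self, fn):
        # Apply fn to every element in parallel; results keep element order.
        with ThreadPoolExecutor() as pool:
            return ParallelCollection(pool.map(fn, self._elements))

    def as_list(self):
        return list(self._elements)

squares = ParallelCollection(range(5)).collect(lambda x: x * x).as_list()
print(squares)  # [0, 1, 4, 9, 16]
```

The point of the sketch is the division of labor: the application programmer writes only the elementwise function, while partitioning and synchronization live inside the reusable class.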
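The WorkPool pattern, in which workers repeatedly take subproblems from a shared pool and may put new subproblems back, can likewise be sketched. This is a hypothetical rendering of the pattern, not the authors' implementation; the function names and the splitting example are illustrative:

```python
# Sketch of a WorkPool for divide-and-conquer (illustrative, not the
# paper's code): solve(p) returns (result or None, list of new subproblems).
import queue
import threading

def run_workpool(initial, solve, workers=4):
    """Process subproblems with a pool of worker threads until none remain."""
    pool = queue.Queue()
    for p in initial:
        pool.put(p)
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            p = pool.get()
            result, new_problems = solve(p)
            if result is not None:
                with lock:                 # results list is shared
                    results.append(result)
            for sub in new_problems:       # children enqueued before
                pool.put(sub)              # the parent is marked done,
            pool.task_done()               # so join() cannot fire early

    for _ in range(workers):
        threading.Thread(target=worker, daemon=True).start()
    pool.join()  # returns once every subproblem has been processed
    return results

# Example: sum 0..99 by splitting the interval until pieces are small.
def split_or_sum(interval):
    lo, hi = interval
    if hi - lo <= 2:
        return sum(range(lo, hi)), []       # leaf: produce a result
    mid = (lo + hi) // 2
    return None, [(lo, mid), (mid, hi)]     # split: two new subproblems

total = sum(run_workpool([(0, 100)], split_or_sum))
print(total)  # 4950
```

As in the abstract's description, each worker may generate further subproblems, and the pool itself hides queuing, termination detection, and locking from the code that defines the problem.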