In this paper, we consider the problem of efficiently writing and using computer programs on large-scale parallel computers. Specifically, we propose a model of computation that allows the user to write a program for a single abstract machine and then use that program on small-scale processor systems and large parallel systems alike. Good examples of problems where this migration is necessary are sequence analysis problems: the comparison of DNA and protein sequences and the visualization of their secondary and tertiary structure. Although many programs for these problems have been written for personal computers, these same programs cannot be used on parallel machines. This paper has three basic parts. First, there is a basic introduction to sequence analysis and the associated computational problems. Second, there is a general description of the computer systems used for these sequence analysis problems. Third, the programming model of a global linear address space is presented, followed by examples of its use in sequence analysis programs. These programs run efficiently on personal computers and distributed-memory parallel computers, illustrating that one program can be written efficiently to solve problems of different sizes on different target architectures.
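The global-linear-address-space idea can be illustrated with a minimal sketch (Python here for brevity; the function name `edit_distance` and the flat-array layout are illustrative assumptions, not the paper's actual code). A pairwise sequence comparison stores its dynamic-programming table as one flat array addressed by a single global index, so the same index arithmetic applies whether the array sits in one processor's local memory or is partitioned across many:

```python
def edit_distance(a: str, b: str) -> int:
    """Pairwise sequence comparison (edit distance) over a flat DP table.

    The (len(a)+1) x (len(b)+1) table is stored as one linear array and
    each cell (i, j) is reached through a single global index -- the kind
    of uniform addressing a global linear address space provides,
    independent of where the memory physically resides.
    """
    m, n = len(a), len(b)
    table = [0] * ((m + 1) * (n + 1))   # one flat, globally indexed array
    idx = lambda i, j: i * (n + 1) + j  # global linear address of cell (i, j)

    for i in range(m + 1):
        table[idx(i, 0)] = i  # cost of deleting i leading characters of a
    for j in range(n + 1):
        table[idx(0, j)] = j  # cost of inserting j leading characters of b

    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            table[idx(i, j)] = min(
                table[idx(i - 1, j)] + 1,         # deletion
                table[idx(i, j - 1)] + 1,         # insertion
                table[idx(i - 1, j - 1)] + cost,  # match / substitution
            )
    return table[idx(m, n)]

print(edit_distance("GATTACA", "GCATGCU"))  # -> 4
```

Because every access goes through one address computation, the same source could in principle run unchanged on a personal computer or on a distributed-memory machine where the runtime maps global addresses onto processors; that separation of program from memory layout is the portability argument made above.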