Currently, almost all parallel implementations of programs fix the granularity at which parallelism is exploited at design time. Depending on the structure of the application and of the parallel hardware, the programmer chooses a fine, coarse, or intermediate granularity, and this choice does not change at runtime. In this paper we argue that for many applications fixing the granularity in advance is a poor strategy. It is instead advantageous to decide at runtime the granularity at which parallelism is exploited, as a function of the available hardware resources and of the overheads incurred by moving to a finer granularity. We present experimental results from a parallel implementation of a geometric constraint satisfaction system to support our thesis. Our results show a significant advantage in using adaptive parallelism.
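The trade-off the abstract describes — choosing granularity at runtime from the available workers and the per-task overhead — can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's constraint-satisfaction implementation; the chunking heuristic and the `TASK_OVERHEAD` constant are assumptions introduced for the example.

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Assumed minimum amount of work (in elements) needed to justify
# spawning one parallel task; below this, overhead dominates.
TASK_OVERHEAD = 1_000

def adaptive_chunks(n_items, n_workers):
    """Pick chunk boundaries at runtime: enough chunks to keep all
    workers busy, but each chunk large enough that task-spawning
    overhead does not dominate the useful work."""
    per_worker = max(1, n_items // (4 * n_workers))  # ~4 chunks per worker
    chunk = max(per_worker, TASK_OVERHEAD)           # never go too fine
    return [(lo, min(lo + chunk, n_items))
            for lo in range(0, n_items, chunk)]

def parallel_sum(data):
    workers = os.cpu_count() or 1
    with ThreadPoolExecutor(max_workers=workers) as pool:
        spans = adaptive_chunks(len(data), workers)
        return sum(pool.map(lambda s: sum(data[s[0]:s[1]]), spans))
```

On a machine with few processors, or for small inputs, this degenerates to a handful of coarse tasks; with many processors and large inputs it produces finer tasks — the same runtime adaptation, driven by resources and overhead, that the abstract argues a design-time choice cannot provide.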