It is reasonable to expect parallel machines to be faster than sequential ones. But exactly how much faster should we expect them to be? Various authors have observed that an exponential speedup is possible if sufficiently many processors are available. One such author has claimed (erroneously) that this is a counterexample to the parallel computation thesis. We show that even more startling speedups are possible: in fact, if enough processors are used, any recursive function can be computed in constant time. Although such machines clearly do not obey the parallel computation thesis, we argue that they nevertheless provide evidence in favour of it. In contrast, we show that an arbitrary polynomial speedup of sequential machines is possible on a model which satisfies the parallel computation thesis. If, as widely conjectured, P ⊈ POLYLOGSPACE, then there can be no exponential speedup on such a model.
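For context, the parallel computation thesis invoked above can be stated informally (this formulation is a reader's aid, not a quotation from the paper): time on a "reasonable" parallel model is polynomially related to space on a sequential model.

```latex
% Parallel computation thesis (informal statement):
% for reasonable parallel models and well-behaved resource bounds T(n),
\bigcup_{k \ge 1} \mathrm{PARALLEL\mbox{-}TIME}\bigl(T(n)^{k}\bigr)
  \;=\; \bigcup_{k \ge 1} \mathrm{SPACE}\bigl(T(n)^{k}\bigr).
% In particular, polylogarithmic parallel time corresponds to
% polylogarithmic sequential space:
\mathrm{PARALLEL\mbox{-}TIME}\bigl(\log^{O(1)} n\bigr)
  \;=\; \mathrm{POLYLOGSPACE}.
```

Under this reading, an exponential speedup of every problem in P on a thesis-respecting model would place P inside POLYLOGSPACE, which is why the conjecture P ⊈ POLYLOGSPACE rules such a speedup out.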