The p-shovelers problem: computing with time-varying data
SPDP '92 Proceedings of the 1992 Fourth IEEE Symposium on Parallel and Distributed Processing
We discuss the design of sequential and parallel algorithms working on a time-increasing data set, within two paradigms of computation. In Paradigm 1 the process terminates once all the data that have arrived so far have been processed, independently of future arrivals. In Paradigm 2 an endless process is divided into stages, and in each stage the computation is carried out on the data set as updated up to the end of the previous stage. A problem may be unsolvable because no algorithm is fast enough to keep pace with the growing data set. The computational cost of algorithms that do succeed is studied from a new perspective, in the sequential RAM and parallel PRAM models, with the running time possibly tending to infinity for certain values of the parameters. It is shown that the traditional time bounds relating parallel and sequential computation (i.e., speed-up and slow-down under the so-called Brent's principle) do not hold, and new bounds are provided. Several problems are examined under the new paradigms, and the new algorithms are compared with the known ones designed for time-invariant data. Optimal sequential and parallel algorithms are also defined, and given whenever possible. In particular, it is shown that some problems gain nothing from a parallel solution, while others can be solved in practice only in parallel. Paradigm 1 is the more innovative of the two, and the corresponding results on parallel speed-up and scaling are perhaps unexpected. Paradigm 2 opens a new perspective on dynamic algorithms, because processing batches of data may be more efficient than processing each incoming datum on-line.
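To make the two paradigms concrete, here is a minimal simulation sketch in Python. It is not from the paper: the function names, the linear arrival law `grow`, the per-item cost, and the `horizon` cutoff are all illustrative assumptions. `paradigm_1` runs until the backlog of arrived-but-unprocessed items empties, and reports failure when the data set grows faster than the algorithm consumes it; `paradigm_2` repeatedly freezes the data set at a stage boundary and processes the resulting batch off-line.

```python
def paradigm_1(arrivals, work_per_item, horizon=1e6):
    """Paradigm 1 (illustrative): terminate as soon as every item that has
    arrived so far has been processed, ignoring future arrivals.

    arrivals(t)   -- total number of items arrived by time t (assumed monotone)
    work_per_item -- sequential time to process one item

    Returns the termination time, or None if the algorithm has not caught
    up with the growing data set by the (arbitrary) time horizon.
    """
    t, processed = 0.0, 0
    while t <= horizon:
        pending = arrivals(t) - processed
        if pending == 0:
            return t                      # caught up: the process terminates
        t += pending * work_per_item      # process the current backlog...
        processed += pending              # ...while new items keep arriving
    return None                           # the data set outruns the algorithm


def paradigm_2(arrivals, work_per_item, n_stages):
    """Paradigm 2 (illustrative): an endless process divided into stages;
    stage i works on the batch of items that arrived up to the end of
    stage i-1.  Yields the batch size and completion time of each stage.
    """
    t, seen = 0.0, 0
    for stage in range(n_stages):
        frozen = arrivals(t)              # data set frozen at the stage boundary
        batch = frozen - seen
        t += batch * work_per_item        # process the whole batch off-line
        seen = frozen
        yield stage, batch, t


if __name__ == "__main__":
    # Items arrive at rate 2 per time unit on top of 10 initial items.
    grow = lambda t: int(2 * t) + 10

    # With 0.4 time units per item the backlog shrinks geometrically and
    # Paradigm 1 terminates (here at t = 18.4; t = 20 in the continuous limit).
    print(paradigm_1(grow, 0.4))

    # With 0.6 time units per item (2 * 0.6 > 1) the backlog grows without
    # bound and the process never terminates.
    print(paradigm_1(grow, 0.6))          # -> None

    for stage, batch, t in paradigm_2(grow, 0.4, 3):
        print(f"stage {stage}: batch of {batch} items done at t = {t:.1f}")
```

In this linear setting `paradigm_1` catches up exactly when (arrival rate) x (work per item) < 1, which mirrors the abstract's point that solvability is a race between the algorithm and the growing data set, and the staged loop of `paradigm_2` shows why batch processing can beat handling each incoming datum on-line.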