Computing with Time-Varying Data: Sequential Complexity and Parallel Speed-Up

  • Authors:
  • F. Luccio; L. Pagli

  • Affiliations:
  • Dipartimento di Informatica, University of Pisa, Corso Italia 40, 56125 Pisa, Italy (both authors)

  • Venue:
  • Theory of Computing Systems
  • Year:
  • 1998


Abstract

We discuss the design of sequential and parallel algorithms that work on a data set growing over time, within two paradigms of computation. In Paradigm 1 the process terminates once all the data that have arrived so far have been treated, independently of future arrivals. In Paradigm 2 an endless process is divided into stages, and in each stage the computation is carried out on the data set as updated up to the previous stage. A problem may be unsolvable because no algorithm is fast enough to keep pace with the growing data set. The computational cost of the algorithms that do succeed is studied from a new perspective, in the sequential RAM and parallel PRAM models, with the running time possibly tending to infinity for appropriate values of the parameters. It is shown that the traditional time bounds relating parallel to sequential computation (i.e., speed-up and slow-down under the so-called Brent's principle) do not hold, and new bounds are provided. Several problems are examined under the new paradigms, and the new algorithms are compared with the known ones designed for time-invariant data. Optimal sequential and parallel algorithms are also defined, and given whenever possible. In particular, it is shown that some problems gain nothing from a parallel solution, while others can be solved in practice only in parallel. Paradigm 1 is the more innovative of the two, and the corresponding results on parallel speed-up and scaling are perhaps unexpected. Paradigm 2 opens a new perspective on dynamic algorithms, because processing batches of data may be more efficient than processing each incoming datum on-line.
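
To make the two paradigms concrete, the following is a minimal sketch in Python; the names arrivals, treat, paradigm_1, and paradigm_2 are illustrative assumptions, not taken from the paper. Paradigm 1 drains everything that has arrived so far and then stops, even if new items keep arriving; Paradigm 2 runs forever, freezing the accumulated batch at each stage boundary. If items arrive faster than treat can consume them, the loop in paradigm_1 never empties the queue, which is the sense in which a problem may be unsolvable.

    from collections import deque

    def treat(item):
        # Placeholder for the per-item computation (hypothetical).
        pass

    def paradigm_1(arrivals: deque):
        # Terminate once every item that has arrived so far is treated,
        # independently of future arrivals. If new items are appended
        # faster than treat() consumes them, this loop never ends.
        while arrivals:
            treat(arrivals.popleft())

    def paradigm_2(arrivals: deque):
        # Endless staged process: stage k works on the data set as
        # updated up to the end of stage k-1.
        while True:
            batch = list(arrivals)  # freeze the data set at the stage boundary
            arrivals.clear()
            for item in batch:
                treat(item)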