Rounding Errors in Algebraic Processes
It is well known in numerical analysis that the computation per iteration in n-line S.O.R. for solving elliptic difference equations increases linearly with n, whereas the asymptotic rate of convergence increases only with √n [3, 4]. Hence, it has been said [5] that there is no reason to use n-line S.O.R. with n greater than ten, say. In a modern computing environment, however, the situation is not so simple for problems larger than core size. Most computers today offer simultaneous “read-write-compute”, so the time per iteration cannot be identified with the computation per iteration. A more reasonable optimization technique is to find the minimum over n of the maximum of compute time, input time, and output time. Even this technique, however, is complicated by the fact that there are trade-offs among these three quantities which must be taken into account.

In this paper, we attempt a quantitative determination of optimal n-line S.O.R. in this more complicated setting. The function to be minimized is, of course, machine dependent: it varies with compute speed, I/O speed, n, and the programming strategy. In section two, we consider a restricted n-line S.O.R. problem in which there is no trade-off between computation and input-output; that is, programming strategy is omitted. In section three, we identify the trade-offs which are allowable, and in sections four and five we consider the problem with programming strategy. Finally, in section six, we apply the results to representative machine configurations.
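The min-max optimization described above can be sketched numerically. The cost model below is purely illustrative and every constant and functional form in it is an assumption, not taken from this paper: computation per iteration is assumed to grow linearly in n, input and output time per iteration are taken as constant, and the iteration count is assumed to shrink like 1/√n, consistent with an asymptotic convergence rate that grows with √n.

```python
import math

# Illustrative constants only; real values are machine dependent.
C_COMPUTE = 2.0   # assumed compute time per line per iteration
C_IO = 5.0        # assumed input (and output) time per iteration

def time_per_iteration(n):
    # With overlapped "read-write-compute", the slowest of the three
    # activities determines the wall-clock time of one iteration.
    compute = C_COMPUTE * n   # computation grows linearly with n
    input_t = C_IO
    output_t = C_IO
    return max(compute, input_t, output_t)

def iterations_needed(n, target=1e-6):
    # Convergence rate assumed proportional to sqrt(n), so the number of
    # iterations to reach a fixed error target shrinks like 1/sqrt(n).
    return math.log(1.0 / target) / math.sqrt(n)

def total_time(n):
    return iterations_needed(n) * time_per_iteration(n)

# Minimize total time over a range of candidate n.
best_n = min(range(1, 51), key=total_time)
```

Under these particular constants the optimum falls at a small n, since once the compute time exceeds the (constant) I/O time, increasing n buys convergence only at the rate √n while costing time linearly; a faster I/O channel or slower processor shifts the balance.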