Neville elimination on multi- and many-core systems: OpenMP, MPI and CUDA
The Journal of Supercomputing
The scalability of a parallel system is a measure of its capacity to use an increasing number of processors effectively. Both the isoefficiency function and the scaled efficiency are metrics used to analyse the scalability of parallel algorithms and architectures. The former relates the size of the problem being solved to the number of processors required to keep the efficiency at a fixed value, while the latter shows how an algorithm scales when both the problem size and the number of processors are increased. This paper models and measures the parallel scalability of the Neville method when a checkerboard (two-dimensional block) partitioning is used. Neville elimination is a method for solving a system of linear equations; it arises naturally when the Neville strategy of interpolation is applied to linear systems. The scaled efficiency of several algorithms of this method is studied on an IBM SP2 and on an HP cluster.
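The two metrics can be stated in standard form. This is a sketch using conventional notation, not taken from the paper: W is the problem size (serial work), p the number of processors, T_1 and T_p the serial and parallel run times, and T_o the total overhead:

```latex
E \;=\; \frac{T_1}{p\,T_p} \;=\; \frac{W}{W + T_o(W,p)}
```

Holding E at a fixed value and solving for W gives the isoefficiency relation W = (E / (1 - E)) · T_o(W, p): the rate at which W must grow with p to maintain that efficiency. Scaled efficiency, by contrast, is E measured directly while W and p are increased together.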
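A serial sketch of Neville elimination may help fix the idea. It is a hypothetical illustration, not the paper's parallel implementation: at step k, each entry below the diagonal in column k is zeroed by subtracting a multiple of the row immediately above it, working from the bottom row upwards, unlike Gaussian elimination, which always subtracts a multiple of pivot row k.

```python
def neville_solve(a, b):
    """Solve A x = b by Neville elimination plus back-substitution.

    Assumes the divisors a[i-1][k] never vanish (e.g. A is totally
    positive, a class for which the method is well suited).
    """
    n = len(a)
    a = [row[:] for row in a]  # work on copies
    b = b[:]
    for k in range(n - 1):
        # zero column k from the bottom up, each row using the one above it
        for i in range(n - 1, k, -1):
            m = a[i][k] / a[i - 1][k]  # multiplier from the adjacent row
            for j in range(k, n):
                a[i][j] -= m * a[i - 1][j]
            b[i] -= m * b[i - 1]
    # back-substitution on the resulting upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x
```

In a checkerboard partitioning, the matrix would be split into two-dimensional blocks across a processor grid, so each elimination step involves communication only between neighbouring rows of blocks.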