The NAS parallel benchmarks, originally developed by NASA to evaluate the performance of its high-performance computers, are among the most widely used benchmark suites for side-by-side comparisons of high-performance machines. Although the suite has grown tremendously over the last two decades, its documentation has lagged behind, because benchmark codes have been added and revised rapidly in response to fast-moving innovation in parallel architectures. Consequently, the learning curve for beginning graduate students, researchers, and software systems engineers who pick up these benchmarks is typically steep. In this paper, we document and assess the NAS parallel benchmark suite by identifying the parallel patterns within the benchmark codes. We believe that such documentation will allow researchers, as well as practitioners in industry, to understand, use, and modify these codes more effectively.