Effective use of the processor memory hierarchy is an important issue in high-performance computing. In this work, a part-level mesh topological traversal algorithm is used to define a reordering of both mesh vertices and regions that increases the spatial locality of data and improves overall cache utilization during on-processor finite element calculations. Examples based on adaptively created unstructured meshes demonstrate the effectiveness of the procedure in cases where the load per processing core is varied but balanced (i.e., elements are equally distributed across cores for a given partition). In one example, the effect of the current adjacency-based data reordering is studied for different phases of an implicit analysis, including element-data blocking, element-level computations, sparse-matrix filling, and equation solution. These results are compared to a case where reordering is applied to mesh vertices only. The computations are performed on several supercomputers, including IBM Blue Gene (BG/L and BG/P), Cray XT (XT3 and XT5), and Sun Constellation Cluster systems. Reordering is observed to improve per-core performance by up to 24% on Blue Gene/L and up to 40% on Cray XT5. The CrayPat hardware performance tool is used to measure the number of cache misses at each level of the memory hierarchy. The measured decrease in L1, L2, and L3 cache misses when data reordering is used closely accounts for the observed decrease in overall execution time.
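The abstract does not spell out the part-level topological traversal itself; a minimal sketch of the general idea behind adjacency-based reordering is a breadth-first renumbering of the mesh adjacency graph, so that entities that are neighbors in the mesh receive nearby indices in memory. The function name and the toy mesh below are illustrative assumptions, not the paper's actual procedure:

```python
from collections import deque

def adjacency_reorder(num_vertices, adjacency):
    """Return an old-index -> new-index map from a breadth-first
    traversal of the mesh adjacency graph, so that neighboring
    vertices get nearby indices (better spatial locality)."""
    order = []
    visited = [False] * num_vertices
    for seed in range(num_vertices):  # loop over seeds to cover disconnected parts
        if visited[seed]:
            continue
        visited[seed] = True
        queue = deque([seed])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adjacency[v]:
                if not visited[w]:
                    visited[w] = True
                    queue.append(w)
    # order[i] is the old index of the vertex placed at new position i
    return {old: new for new, old in enumerate(order)}

# Example: a 1D chain of 4 vertices stored in scrambled order (0-3-2-1)
adj = {0: [3], 1: [2], 2: [1, 3], 3: [0, 2]}
perm = adjacency_reorder(4, adj)
```

In an actual finite element code the same permutation would then be applied to vertex coordinates, element connectivity, and the assembled sparse-matrix row/column numbering, so that element-level gathers and matrix-vector products walk memory more sequentially.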