The algebraic multigrid (AMG) algorithm is well known for its efficiency in solving large-scale sparse linear systems arising from computationally challenging applications, especially on unstructured or deformed structured grids. Although most of its components can be parallelized in a straightforward way, the classical coarsening process, such as the Ruge-Stuben (RS) strategy, is highly sequential and requires new parallel approaches. In recent years, many parallel coarsening strategies have been proposed with the aim of running efficiently on hundreds or thousands of processors. This paper presents two new parallel coarsening strategies that distribute C-points more effectively, yielding smaller operator complexity and more robust convergence of the iterations. The main idea of these strategies is to synchronize processors judiciously during the coarsening process of the well-known RS0 or CLJP strategies. Qualitative analyses and numerical experiments show that, on hundreds of processors, our new strategies consistently outperform the currently well-known parallel coarsening strategies such as RS3, Falgout, and CLJP, both in convergence speed and in operator complexity.
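For orientation, the sequential baseline that the parallel strategies above modify is the classical Ruge-Stuben first pass: repeatedly pick an undecided point with maximal measure (the number of undecided points it strongly influences) as a C-point, mark the points it influences as F-points, and update measures. The sketch below is a minimal illustration of that first pass on a toy graph; the graph, helper names, and tie-breaking rule are illustrative assumptions, not taken from the paper.

```python
def rs_first_pass(strong):
    """Classical Ruge-Stuben first-pass C/F splitting (illustrative sketch).

    strong: dict mapping each point i to the set of points that strongly
    influence i. Returns the sets (C, F) of coarse and fine points.
    """
    points = set(strong)
    # S^T(i): the points that i strongly influences
    influences = {i: set() for i in points}
    for i, deps in strong.items():
        for j in deps:
            influences[j].add(i)
    # measure lambda(i) = |S^T(i)| (all points start undecided)
    measure = {i: len(influences[i]) for i in points}
    undecided = set(points)
    C, F = set(), set()
    while undecided:
        # pick an undecided point of maximal measure (lowest index on ties)
        i = max(undecided, key=lambda p: (measure[p], -p))
        undecided.discard(i)
        C.add(i)
        # points strongly influenced by the new C-point become F-points
        for j in influences[i] & undecided:
            undecided.discard(j)
            F.add(j)
            # each new F-point raises the measure of its undecided influencers
            for k in strong[j] & undecided:
                measure[k] += 1
    return C, F

# toy 1D chain: each point strongly depends on its immediate neighbours
n = 7
strong = {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}
C, F = rs_first_pass(strong)
print(sorted(C), sorted(F))  # → [1, 3, 5] [0, 2, 4, 6]
```

The sequential bottleneck is visible here: each C-point choice depends on measure updates from all previous choices. RS0 breaks this dependence by running the pass independently on each processor's subdomain, while CLJP selects whole independent sets of locally maximal points in parallel; the strategies proposed in this paper coordinate processors during exactly this selection step.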