Parallel-adaptive simulation with the multigrid-based software framework UG

  • Author: Stefan Lang
  • Affiliation: University of Heidelberg, Interdisciplinary Center for Scientific Computing, Im Neuenheimer Feld 368, 69120 Heidelberg, Germany
  • Venue: Engineering with Computers
  • Year: 2006

Abstract

In this paper we present design aspects and concepts of the unstructured grids (UG) software framework that are relevant for the parallel-adaptive simulation of time-dependent, nonlinear partial differential equations. The architectural design is discussed at the system, subsystem and component level for distributed mesh management and local adaptation capabilities. Parallelization is built on top of the innovative programming model dynamic distributed data (DDD); newly introduced modules and extensions of DDD are discussed. Local multigrid methods are introduced as optimal linear solvers in the solution process. The demands of local parallel mesh adaptation are then described: besides a mesh manipulation module, additional steps for dynamic load balancing and migration have to be introduced. Their realization in the context of local multigrid methods is decidedly non-trivial and constitutes the major contribution of the paper. Parallel I/O provides an efficient mechanism for restart, postprocessing and long-term, large-scale computations. The UG approach is validated by a considerable code-reuse fraction of nearly 90% for simulations of complicated phenomena such as porous-media flow and transport as well as elastoplasticity. Parallel simulations with up to 10^8 unknowns are shown for the Couplex benchmark, which makes a grid convergence study possible to verify the reliability of the computed results. For a parallel-adaptive elastoplasticity computation, the speedup of the multigrid solver, the most scalability-critical part of the simulation, exceeds 300 on 512 processors. The overhead introduced by the parallel-adaptive scheme turns out to be below 10% of the total simulation time.
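
To make the workflow summarized above more concrete, the following minimal C sketch outlines one step of a generic parallel-adaptive solution cycle: error estimation, local mesh adaptation, dynamic load balancing with migration, a local multigrid solve, and a parallel I/O checkpoint. All function names (estimate_error, mark_and_adapt, balance_and_migrate, solve_local_multigrid, write_checkpoint) are hypothetical placeholders chosen for illustration; they are not the actual UG or DDD API.

    /* Sketch of one parallel-adaptive time step. The phase functions are
     * illustrative stubs, not the UG/DDD interface. */
    #include <stdio.h>

    static void estimate_error(void)        { printf("a posteriori error estimation\n"); }
    static void mark_and_adapt(void)        { printf("local mesh refinement/coarsening\n"); }
    static int  load_imbalanced(void)       { return 1; /* assume repartitioning is needed */ }
    static void balance_and_migrate(void)   { printf("dynamic load balancing + migration\n"); }
    static void solve_local_multigrid(void) { printf("local multigrid solve\n"); }
    static void write_checkpoint(void)      { printf("parallel I/O checkpoint\n"); }

    /* One pass through the adapt -> balance -> migrate -> solve -> I/O cycle. */
    static void parallel_adaptive_step(void)
    {
        estimate_error();
        mark_and_adapt();
        if (load_imbalanced())
            balance_and_migrate();
        solve_local_multigrid();
        write_checkpoint();
    }

    int main(void)
    {
        for (int step = 0; step < 3; ++step)   /* a few time steps */
            parallel_adaptive_step();
        return 0;
    }

As a sense check of the reported scaling, a speedup of about 300 on 512 processors corresponds to a parallel efficiency of roughly 300/512, i.e. about 59%, for the multigrid solver.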