Large scale parallel structured AMR calculations using the SAMRAI framework. Proceedings of the 2001 ACM/IEEE Conference on Supercomputing.
Dynamic load balancing of SAMR applications on distributed systems. Proceedings of the 2001 ACM/IEEE Conference on Supercomputing.
BoomerAMG: a parallel algebraic multigrid solver and preconditioner. Applied Numerical Mathematics (special issue: Developments and Trends in Iterative Methods for Large Systems of Equations, in memoriam Rüdiger Weiss).
Enhancing scalability of parallel structured AMR calculations. ICS '03: Proceedings of the 17th Annual International Conference on Supercomputing.
Parallel clustering algorithms for structured AMR. Journal of Parallel and Distributed Computing.
An adaptive mesh refinement benchmark for modern parallel programming languages. Proceedings of the 2007 ACM/IEEE Conference on Supercomputing.
Scalable adaptive mantle convection simulation on petascale supercomputers. Proceedings of the 2008 ACM/IEEE Conference on Supercomputing.
Uintah: a scalable framework for hazard analysis. Proceedings of the 2010 TeraGrid Conference.
p4est: Scalable Algorithms for Parallel Adaptive Mesh Refinement on Forests of Octrees. SIAM Journal on Scientific Computing.
Cello is a highly scalable, object-oriented adaptive mesh refinement (AMR) framework currently under development. While Cello is intended to be usable across multiple scientific problem domains, we specifically target the specialized requirements of astrophysics and cosmology applications. Development of Cello is funded by the National Science Foundation (PHY-1104819, AST-0808184).

The Cello project grew out of the need to address scalability in the parallel AMR astrophysics and cosmology application Enzo [12, 41, 45]. Enzo has a long, proven track record of producing new scientific results [1, 30, 36, 47]; however, its AMR design and implementation have several known scaling issues that are difficult to address without costly and invasive changes to the code. This has made it progressively more difficult for Enzo to take full advantage of the compute power of current high-end HPC platforms. While work continues on improving Enzo's scalability, we are also reimplementing Enzo's physics capabilities on top of the Cello scalable AMR framework; the resulting "petascale" version of Enzo is called Enzo-P.

In this paper we elaborate on the known scaling issues in Enzo, and describe how we plan to address these and other scaling issues in Cello through a combination of existing and novel approaches. Two of the more fundamental changes are to incorporate process virtualization and data-driven execution by using Charm++ [34], and to use a new variant of the "forest-of-octrees" approach for the AMR infrastructure.
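To make the "forest-of-octrees" idea concrete, the following is a minimal, self-contained C++ sketch of one common way such frameworks address AMR blocks; it is an illustration only, not Cello's actual data structures or API. Each block in one octree of the forest is identified by a root-tree index plus the sequence of child indices (0..7) taken from that root, and same-level neighbors can be found by decoding that path into per-axis coordinates. All names here (`BlockIndex`, `coords`, `neighbor`) are hypothetical.

```cpp
#include <array>
#include <optional>
#include <vector>

// Hypothetical block address in a forest of octrees: each child index packs
// one bit per axis (bit 0 = x, bit 1 = y, bit 2 = z), and path.size() is the
// refinement level of the block within its root tree.
struct BlockIndex {
    int tree;                 // which root octree in the forest
    std::vector<int> path;    // child index (0..7) chosen at each level
};

// Decode the child-index path into per-axis integer block coordinates
// at the block's own refinement level.
static std::array<int, 3> coords(const BlockIndex& b) {
    std::array<int, 3> c{0, 0, 0};
    for (int child : b.path)
        for (int a = 0; a < 3; ++a)
            c[a] = (c[a] << 1) | ((child >> a) & 1);
    return c;
}

// Re-encode per-axis coordinates at a given level back into a path,
// peeling off one bit per axis from the finest level upward.
static BlockIndex from_coords(int tree, int level, std::array<int, 3> c) {
    BlockIndex b{tree, std::vector<int>(level, 0)};
    for (int l = level - 1; l >= 0; --l) {
        int child = 0;
        for (int a = 0; a < 3; ++a) {
            child |= (c[a] & 1) << a;
            c[a] >>= 1;
        }
        b.path[l] = child;
    }
    return b;
}

// Same-level face neighbor along one axis (dir = +1 or -1). Returns nothing
// if the neighbor lies outside this root tree; a real forest implementation
// would then hop across the inter-tree connectivity to an adjacent root.
static std::optional<BlockIndex> neighbor(const BlockIndex& b,
                                          int axis, int dir) {
    std::array<int, 3> c = coords(b);
    c[axis] += dir;
    int n = 1 << static_cast<int>(b.path.size()); // blocks per axis at level
    if (c[axis] < 0 || c[axis] >= n) return std::nullopt;
    return from_coords(b.tree, static_cast<int>(b.path.size()), c);
}
```

The appeal of this addressing scheme for a data-driven runtime like Charm++ is that a block's neighbors, parent, and children are all computable locally from the index itself, with no global mesh structure to consult or synchronize.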