A survey of high level frameworks in block-structured adaptive mesh refinement packages

https://doi.org/10.1016/j.jpdc.2014.07.001

Highlights

  • A survey of mature, openly available, state-of-the-art structured AMR libraries and codes.

  • Discussion of their frameworks, challenges and design trade-offs.

  • Directions being pursued by the codes to prepare for future many-core and heterogeneous platforms.

Abstract

Over the last decade, block-structured adaptive mesh refinement (SAMR) has found increasing use in large, publicly available codes and frameworks. SAMR frameworks have evolved along different paths. Some have stayed focused on specific domain areas, while others have pursued more general functionality, providing the building blocks for a larger variety of applications. In this survey paper we examine a representative set of SAMR packages and SAMR-based codes that have been in existence for half a decade or more, have a reasonably sized and active user base outside of their home institutions, and are publicly available. The set consists of a mix of SAMR packages and application codes that cover a broad range of scientific domains. We look at their high-level frameworks, their design trade-offs, and their approaches to dealing with the advent of radical changes in hardware architecture. The codes included in this survey are BoxLib, Cactus, Chombo, Enzo, FLASH, and Uintah.

Introduction

Block-structured adaptive mesh refinement (SAMR) [8], [7] first appeared as a computational technique almost 30 years ago; since then it has been used in many individual research codes and, increasingly over the last decade, in large, publicly available code frameworks and application codes. The first uses of SAMR focused almost entirely on explicit methods for compressible hydrodynamics, and these types of problems motivated the building of many of the large code frameworks. SAMR frameworks have evolved along different paths. Some have stayed focused on specific domain areas, adding large amounts of functionality and problem-specific physics modules that are relevant to those applications. Examples of these include AstroBEAR [26], CRASH [91], Cactus [42], [15], Enzo [14], [38], FLASH [40], [32], Overture [76], PLUTO [68], [89], and Uintah [77], [78]. Other frameworks have pursued a more general functionality, providing the building blocks for a larger variety of applications while enabling domain-specific codes to be built using that framework. As an example, while almost every SAMR framework can be used to solve systems of hyperbolic conservation laws explicitly, not all frameworks include the functionality to solve elliptic equations accurately on the entire hierarchy or a subset of levels. Examples of frameworks constructed specifically for solving hyperbolic conservation laws include AMROC [29] and AMRClaw [5], both based on the wave propagation algorithms of R. LeVeque. Extensions of AMRClaw include GeoClaw [41], the widely used tsunami simulation tool. BoxLib [13], Chombo [23], Jasmine [73] and SAMRAI [17], [47] are more general in that they supply full functionality for solving equation sets containing hyperbolic, parabolic and elliptic equations, and facilitate the development of codes for simulating a wide variety of different applications. PARAMESH [39] supplies only the mesh management capability and as such is equation-independent. A more comprehensive list of codes that use SAMR, and other useful adaptive mesh refinement (AMR) resources, can be found at [46].

SAMR codes all rely on the same fundamental concept, viz. that the solution can be computed in different regions of the domain with different spatial resolutions, where each region at a particular resolution has a logically rectangular structure. In some SAMR codes the data is organized by level, so that the description of the hierarchy is fundamentally defined by the union of blocks at each level; while others organize their data with unique parent–child relationships. Along with the spatial decomposition, different codes solving time-dependent equations make different assumptions about the time stepping, i.e., whether blocks at all levels advance at the same time step, or blocks advance with a time step unique to their level. Finally, even when frameworks are used to solve exactly the same equations with exactly the same algorithm, the performance can vary due to the fact that different frameworks are written in different languages, with different choices of data layout, and also differ in other implementation details. However, despite their differences in infrastructure and target domain applications, the codes have many aspects that are similar, and many of these codes follow a set of very similar software engineering practices.
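
To make the organizational choices above concrete, the sketch below shows, in simplified C++, a level-organized view of an SAMR hierarchy. All type and field names (Box, Level, Hierarchy, ratio_to_coarser) are illustrative assumptions, not taken from any of the surveyed codes; a tree-organized code would additionally store explicit parent–child links between boxes.

    #include <array>
    #include <cstdio>
    #include <vector>

    // A logically rectangular region of index space at one resolution.
    struct Box {
        std::array<int, 3> lo;  // inclusive lower corner in index space
        std::array<int, 3> hi;  // inclusive upper corner in index space
    };

    // One refinement level: the union of its boxes, all at the same resolution.
    struct Level {
        int ratio_to_coarser;    // refinement ratio relative to the next coarser level
        std::vector<Box> boxes;  // disjoint, logically rectangular blocks
    };

    // Level-organized view: the hierarchy is defined by the union of boxes on
    // each level; no explicit parent-child pointers are kept.
    struct Hierarchy {
        std::vector<Level> levels;  // levels[0] is the coarsest
    };

    int main() {
        Box coarse_box{{0, 0, 0}, {63, 63, 63}};    // a 64^3 coarse domain
        Box fine_box{{32, 32, 32}, {95, 95, 95}};   // one refined region (ratio 2)

        Hierarchy h;
        h.levels.push_back(Level{1, {coarse_box}});
        h.levels.push_back(Level{2, {fine_box}});
        std::printf("levels in hierarchy: %zu\n", h.levels.size());
    }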

In this survey paper we examine a representative set of SAMR packages and SAMR-based codes that: (1) have been in existence for half a decade or more, (2) have a reasonably sized and active user base outside of their home institutions, and, most importantly, (3) are publicly available to any interested user. In selecting the codes we have taken care to include variations in spatial and temporal refinement practices, load distribution and meta-information management. Therefore, we have octree-based and patch-based SAMR, subcycling done in different ways as well as no subcycling at all, load distribution carried out level by level or over all levels at once, and meta-data that is either globally replicated or kept only in a local view. Additionally, the set covers a broad range of scientific domains that use SAMR technology in different ways. We look at their high-level frameworks, consider the trade-offs between various approaches, and examine the challenges posed by the advent of radical changes in hardware architecture. The codes studied in detail in this survey are BoxLib, Cactus, Chombo, Enzo, FLASH, and Uintah. The application domains covered by the union of these codes include astrophysics, cosmology, general relativity, combustion, climate science, subsurface flow, turbulence, fluid–structure interactions, plasma physics, and particle accelerators.
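
The difference between subcycled and non-subcycled time advancement, one of the variations listed above, can be summarized by the minimal C++ sketch below. The function names (advance_level, sync_with_coarser) and the fixed refinement ratio are assumptions made for illustration and do not correspond to any specific code's interface.

    #include <cstdio>

    // Stand-ins for the real per-level operations (hypothetical names).
    void advance_level(int lev, double dt) { std::printf("advance level %d by dt = %g\n", lev, dt); }
    void sync_with_coarser(int lev)        { std::printf("sync level %d with level %d\n", lev, lev - 1); }

    // No subcycling: every level advances with the same (finest-level) time step.
    void advance_without_subcycling(int nlevels, double dt) {
        for (int lev = 0; lev < nlevels; ++lev)
            advance_level(lev, dt);
    }

    // Subcycling: each level advances with its own step; for a refinement
    // ratio r, a fine level takes r substeps per coarse step, recursively,
    // after which the fine and coarse solutions are reconciled.
    void advance_with_subcycling(int lev, int nlevels, int r, double dt) {
        advance_level(lev, dt);
        if (lev + 1 < nlevels) {
            for (int i = 0; i < r; ++i)
                advance_with_subcycling(lev + 1, nlevels, r, dt / r);
            sync_with_coarser(lev + 1);
        }
    }

    int main() {
        advance_without_subcycling(3, 0.25);    // three levels, uniform time step
        advance_with_subcycling(0, 3, 2, 1.0);  // three levels, ratio 2, coarse dt = 1.0
    }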

Section snippets

Overview of the codes

BoxLib is primarily a framework for building massively parallel SAMR applications. The goals of the BoxLib framework are twofold: first, to support the rapid development, implementation and testing of new algorithms in a massively parallel SAMR framework; and second, to provide the basis for large-scale domain-specific simulation codes to be used for numerical investigations of phenomena in fields such as astrophysics, cosmology, subsurface flow, turbulent combustion, and any other field which

Frameworks

There are a number of similarities between the six codes/software frameworks (which from now on we will call “codes”) described in this paper. Each of these codes provides some generic support for SAMR applications as well as more specialized support for specific applications. Since the codes detailed in the survey come from different disciplines, groups, and scientific domains, they each use various terms in their own different ways. In order to facilitate the discussion we override individual

Performance challenges

Use of SAMR provides an effective compression mechanism for the solution data by keeping high resolution only where it is most needed. This data compression comes with certain costs: the management of the mesh is more complex and carries much more meta-data, and good performance (scaling) is harder to achieve. The design space is large, as can be seen from the variations found in the SAMR codes. Some of the performance challenges are inherent in using SAMR, for example even load distribution among
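
As a rough illustration of this compression effect, the C++ snippet below compares the cell count of an SAMR hierarchy that refines only a small fraction of the domain with that of a uniform grid at the finest resolution. The domain size, refinement ratio, and per-level coverage fractions are made-up numbers for the sketch, not measurements from any of the surveyed codes.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double base     = 256.0;                  // coarse grid is 256^3 cells
        const int    ratio    = 4;                      // refinement ratio between levels
        const double coverage[] = {1.0, 0.10, 0.01};    // fraction of the domain covered per level

        // Total cells stored by the three-level hierarchy.
        double amr_cells  = 0.0;
        double resolution = base;                       // effective linear resolution of each level
        for (int lev = 0; lev < 3; ++lev) {
            amr_cells  += coverage[lev] * std::pow(resolution, 3);
            resolution *= ratio;
        }

        // A uniform grid at the finest (level-2) resolution: 256 * 4 * 4 = 4096 cells per side.
        const double uniform_cells = std::pow(base * ratio * ratio, 3);

        std::printf("AMR cells     : %.3g\n", amr_cells);
        std::printf("Uniform cells : %.3g\n", uniform_cells);
        std::printf("Compression   : %.1fx fewer cells with AMR\n", uniform_cells / amr_cells);
    }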

Future directions

As we anticipate the move to architectures with increasingly more cores and heterogeneous computational resources per node, both algorithms and software need to evolve. Uintah is perhaps ahead of all the other codes in exploiting newer approaches to programming abstractions. Future plans for SAMR and other multiphysics codes trend towards removing flexibility from some parts while adding flexibility to other parts. For example, a common theme among many patch based codes is to move
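
One way to read this trend is sketched below, purely as an assumption-laden illustration: per-patch work is expressed as tasks with declared data dependences, so that a runtime, rather than the application, decides how to schedule that work across cores or accelerators. The types Patch and Task and the function execute are hypothetical and deliberately do not mirror Uintah's or any other surveyed code's actual API.

    #include <cstdio>
    #include <functional>
    #include <string>
    #include <vector>

    struct Patch { int id; };

    // A task: a unit of per-patch work plus the variables it reads and writes,
    // which lets a runtime infer ordering instead of the application hard-coding it.
    struct Task {
        std::string name;
        std::vector<std::string> reads, writes;    // names of the variables touched
        std::function<void(const Patch&)> kernel;  // the per-patch work itself
    };

    // A toy "runtime": here it simply runs the tasks in the order given over all
    // patches, but a real scheduler could reorder independent tasks, overlap
    // communication with computation, or offload kernels to an accelerator.
    void execute(const std::vector<Task>& tasks, const std::vector<Patch>& patches) {
        for (const Task& t : tasks)
            for (const Patch& p : patches)
                t.kernel(p);
    }

    int main() {
        std::vector<Patch> patches = {{0}, {1}, {2}};
        std::vector<Task> tasks = {
            {"advance", {"u_old"}, {"u_new"},
             [](const Patch& p) { std::printf("advance on patch %d\n", p.id); }},
        };
        execute(tasks, patches);
    }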

Summary and conclusions

The application codes and the infrastructure packages described in this survey provide a snapshot of high level frameworks utilized by multiphysics simulations when using block structured SAMR techniques. The selected set does not claim to be comprehensive; there are many more AMR-based codes and infrastructure packages that are in active use by different communities. Rather, it is representative of the different approaches, capabilities and application areas served by AMR. The codes described

Acknowledgments

BoxLib: Much of the BoxLib development over the past 20+ years has been supported by the Applied Mathematics Program and the SciDAC program under the US DOE Office of Science at LBNL under contract No. DE-AC02-05CH11231. Scaling studies of BoxLib have used resources of NERSC and OLCF, which are supported by the Office of Science of the US DOE under Contract Nos. DE-AC02-05CH11231 and DE-AC05-00OR22725, respectively.

Cactus: Cactus is developed with direct and indirect support from a number of


References (95)

  • W.D. Henshaw et al., Parallel computation of three-dimensional flows using overlapping grids with adaptive mesh refinement, J. Comput. Phys. (2008)
  • S. Husa et al., Kranc: a Mathematica application to generate numerical codes for tensorial evolution equations, Comput. Phys. Commun. (2006)
  • F. Miniati et al., Block structured adaptive mesh and time refinement for hybrid, hyperbolic + N-body systems, J. Comput. Phys. (2007)
  • S.G. Parker, A component-based architecture for parallel multi-physics PDE simulation, Future Generation Comput. Syst. (2006)
  • D. Sulsky et al., A particle method for history-dependent materials, Comput. Methods Appl. Mech. Engrg. (1994)
  • W. Zhang et al., CASTRO: a new compressible astrophysical solver. II. Gray radiation hydrodynamics, Astrophys. J. Suppl. (2013)
  • G. Allen, T. Goodale, F. Löffler, D. Rideout, E. Schnetter, E.L. Seidel, Component Specification in the Cactus...
  • G. Allen, F. Löffler, T. Radke, E. Schnetter, E. Seidel, Integrating Web 2.0 technologies with scientific simulation...
  • A.S. Almgren et al., CASTRO: a new compressible astrophysical solver. I. Hydrodynamics and self-gravity, Astrophys. J. (2010)
  • A.S. Almgren et al., Nyx: a massively parallel AMR code for computational cosmology, Astrophys. J. (2013)
  • AMRCLAW, 2009....
  • E. Ateljevich, P. Colella, D. Graves, T. Ligocki, J. Percelay, P. Schwartz, Q. Shu, CFD Modeling in the San Francisco...
  • M. Berger et al., An algorithm for point clustering and grid generation, IEEE Trans. Syst. Man Cybern. (1991)
  • M. Berzins, Status of Release of the Uintah Computational Framework, Tech. Rep. UUSCI-2012-001 (2012)
  • M. Berzins et al., Uintah - A Scalable Framework for Hazard Analysis
  • M. Blazewicz et al., From physics model to results: an optimizing framework for cross-architecture code generation, Sci. Program. (2013)
  • BoxLib, 2011....
  • G.L. Bryan, M.L. Norman, B.W. O’Shea, T. Abel, J.H. Wise, M.J. Turk, D.R. Reynolds, D.C. Collins, P. Wang, S.W....
  • Cactus developers, Cactus Computational Toolkit, 2013...
  • Carpet developers, Carpet: Adaptive Mesh Refinement for the Cactus Framework, 2013....
  • CASC, SAMRAI Structured Adaptive Mesh Refinement Application Infrastructure...
  • P. Colella, D. Graves, T. Ligocki, D. Modiano, B.V. Straalen, EBChombo software package for Cartesian grid embedded...
  • P. Colella, D. Graves, T. Ligocki, D. Modiano, B.V. Straalen, EBAMRTools: EBChombo’s adaptive refinement library, 2003....
  • P. Colella, D. Graves, T. Ligocki, D. Modiano, B.V. Straalen, EBAMRGodunov, 2003....
  • P. Colella et al.
  • A.J. Cunningham et al., Simulating magnetohydrodynamical flow with constrained transport and adaptive mesh refinement: algorithms and tests of the AstroBEAR code, Astrophys. J. Suppl. Ser. (2009)
  • C. Daley, J. Bachan, S. Couch, A. Dubey, M. Fatenejad, B. Gallagher, D. Lee, K. Weide, Adding shared memory parallelism...
  • M. Day et al., Numerical simulation of laminar reacting flows with complex chemistry, Combust. Theory Modell. (2000)
  • R. Deiterding, AMROC: blockstructured adaptive mesh refinement in object-oriented C++,...
  • M.R. Dorr et al.
  • A. Dubey et al., Pragmatic optimizations for better scientific utilization of large supercomputers, Internat. J. High Perform. Comput. Appl. (2013)
  • A. Dubey et al., Imposing a Lagrangian Particle Framework on an Eulerian Hydrodynamics Infrastructure in FLASH, Astrophys. J. Suppl. (2012)
  • A. Dubey, L. Reid, R. Fisher, Introduction to FLASH 3.0 with application to supersonic turbulence, Phys. Scripta T132...
  • C. Earl et al., Nebo: A Domain-Specific Language for High-Performance Computing, Technical Report UUCS-12-032 (2013)
  • EinsteinToolkit maintainers, Einstein Toolkit: Open software for relativistic astrophysics, 2013....
  • Enzo developers, Enzo astrophysical AMR code, 2013....
  • B. Fryxell et al., FLASH: an adaptive mesh hydrodynamics code for modeling astrophysical thermonuclear flashes, Astrophys. J. Suppl. (2000)

    Anshu Dubey is a member of the Applied Numerical Algorithms Group at Lawrence Berkeley National Laboratory. Before joining LBL she was the Associate Director of the Flash Center for Computational Science at the University of Chicago. She received her Ph.D. in Computer Science (1993) from Old Dominion University and B.Tech. in Electrical Engineering from Indian Institute of Technology Delhi (1985). Her research interests are in parallel algorithms, computer architecture and software engineering applicable to high performance scientific computing.

    Ann Almgren is a Staff Scientist in the Computing Research Department at Lawrence Berkeley National Laboratory. She received her B.A. in physics from Harvard University, and her Ph.D. in mechanical engineering from the University of California, Berkeley. Her research interests include numerical simulation of complex multiscale physical phenomena, with a current focus on low Mach number fluid dynamics, as well as adaptive mesh refinement techniques for evolving multicore architectures.

    John Bell is a Senior Staff Scientist and leader of the Center for Computational Sciences and Engineering at Lawrence Berkeley National Laboratory. He received his B.S. in mathematics from MIT, and his Ph.D. in mathematics from Cornell University. His research focuses on the development and analysis of numerical methods for partial differential equations arising in science and engineering. He has made contributions in the areas of finite difference methods, numerical methods for low Mach number flows, adaptive mesh refinement, interface tracking and parallel computing. He has also worked on the application of these numerical methods to problems from a broad range of fields including combustion, shock physics, seismology, flow in porous media and astrophysics.

    Martin Berzins is a Professor of Computer Science in the School of Computing at the University of Utah and a member of the Scientific Computing and Imaging Institute there. Martin obtained his B.S. and Ph.D. degrees from the University of Leeds in the UK, where he founded the Computational PDEs Unit and became the Research Dean for Engineering. Martin’s research interests lie in algorithms and software for the parallel solution of large scale science and engineering applications.

    Steven R. Brandt currently holds a position as adjunct professor of computer science, and research staff (IT consultant) at the Center for Computation & Technology (CCT) at LSU. He received his Ph.D. at the University of Illinois at Urbana Champaign in 1996. His research interests lie in computational science and high performance computing. Steven R. Brandt co-leads the Cactus development team. His recent work includes adaptive mesh refinement, and GPU acceleration.

    Home page: https://www.cct.lsu.edu/~sbrandt/.

    Greg Bryan is an Associate Professor in the Department of Astronomy at Columbia University. He received his B.Sc. in physics from the University of Calgary (Canada) and his Ph.D. from the University of Illinois. His interest is in theoretical and computational astrophysics, in particular computational structure formation. He is the original creator of the Enzo code (http://enzo-project.org) and is currently one of the lead developers.

    Phil Colella is a Senior Scientist and leads the Applied Numerical Algorithms Group in the Computational Research Division at the Lawrence Berkeley National Laboratory, and a Professor in Residence in the Electrical Engineering and Computer Science Department at UC Berkeley. He received his A.B. (1974), M.A. (1976) and Ph.D. (1979) degrees from the University of California at Berkeley, all in applied mathematics. He has developed high-resolution and adaptive numerical algorithms for partial differential equations and numerical simulation capabilities for a variety of applications in science and engineering. He has also participated in the design of high-performance software infrastructure for scientific computing, including software libraries, frameworks, and programming languages.

    Daniel Graves is a member of the Applied Numerical Algorithms Group at Lawrence Berkeley National Laboratory. He received his B.S. in Mechanical Engineering from the University of New Hampshire and his Ph.D. in Mechanical Engineering from the University of California, Berkeley. He is one of the core developers of Chombo.

    Michael Lijewski is a staff member of the Center for Computational Sciences and Engineering at Lawrence Berkeley National Laboratory and is one of the principal architects and developers of the BoxLib software framework. He received his B.S. in Mathematics from Illinois Wesleyan University, and his M.S. in Applied Mathematics from the University of Maryland, College Park. His areas of expertise include algorithm and software design for current and future multicore architectures.

    Frank Löffler currently holds a position as research staff (IT consultant) at the Center for Computation & Technology (CCT) at LSU. He received his Ph.D. at the Max-Planck-Institute Potsdam (Albert-Einstein-Institute), Germany in 2005. His research interests lie in computational science and applications in physics and chemistry. Löffler co-leads the Cactus development team at LSU and is the head maintainer of the Einstein Toolkit. His recent work includes adaptive mesh refinement improvements, simulations of single and binary neutron stars, and path integral calculations of atoms and molecules.

    Home page: https://www.cct.lsu.edu/~knarf/index.html.

    Brian O’Shea is an Assistant Professor of Physics and Astronomy at Michigan State University. He received his Ph.D. in physics at the University of Illinois in 2005. His research interests lie in theoretical and computational astrophysics, and he specializes in the study of galaxies and the intergalactic medium using high-dynamic-range, multiphysics cosmological simulations as well as in performance optimization of large AMR simulations of self-gravitating fluid flow. He is one of the lead developers of the Enzo astrophysics code (http://enzo-project.org).

    Home page: http://www.pa.msu.edu/~osheabr/.

    Erik Schnetter is Research Technologies Group Lead at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada. He received his Ph.D. at the Eberhard-Karls-Universität Tübingen, Germany in 2003. His research interests lie in computational science, in using computers as tools to solve scientific and engineering problems. His recent work includes adaptive mesh refinement and multi-block methods for relativistic astrophysics simulations, software frameworks and automated code generation, as well as sustainable performance optimization for modern high-performance computing hardware architectures.

    Home page: http://www.perimeterinstitute.ca/personal/eschnetter/.

    Brian Van Straalen is a member of the Applied Numerical Algorithms Group at Lawrence Berkeley National Laboratory. He received his BASc in Mechanical Engineering (1993) and his MMath in Applied Mathematics (1995) from the University of Waterloo, Canada and is a Ph.D. candidate at UC Berkeley in Computer Science. His research focuses on software engineering for high performance scientific computing.

    Klaus Weide is a Research Professional at the Flash Center for Computational Science at the University of Chicago. He received Master’s and doctoral degrees (1992) in physics from the University of Göttingen in Germany. He did research work on the molecular dynamics of small molecules first at the Max Planck Institute for Fluid Dynamics in Göttingen and, after 1993, at the University of Chicago. Klaus joined the Center for Thermonuclear Astrophysical Flashes at the University of Chicago in 2006 and since then has been working on extending, porting, maintaining, and supporting the Center’s FLASH code.
