
Computer Physics Communications

Volume 233, December 2018, Pages 156-166

Afivo: A framework for quadtree/octree AMR with shared-memory parallelization and geometric multigrid methods

https://doi.org/10.1016/j.cpc.2018.06.018

Abstract

Afivo is a framework for simulations with adaptive mesh refinement (AMR) on quadtree (2D) and octree (3D) grids. The framework comes with a geometric multigrid solver and shared-memory (OpenMP) parallelism, and it supports output in Silo and VTK file formats. Afivo can be used to efficiently simulate AMR problems with up to about 10^8 unknowns on desktops, workstations or single compute nodes. For larger problems, existing distributed-memory frameworks are better suited. The framework has no built-in functionality for specific physics applications, so users have to implement their own numerical methods. The included multigrid solver can be used to efficiently solve elliptic partial differential equations such as Poisson's equation. Afivo's design was kept simple, which in combination with the shared-memory parallelism facilitates modification of and experimentation with AMR algorithms. The framework has already been used to perform 3D simulations of streamer discharges, which required tens of millions of cells.

Program summary

Program Title: Afivo

Program Files doi: http://dx.doi.org/10.17632/5y43rjdmxd.1

Licensing provisions: GPLv3

Programming language: Fortran 2011

External routines/libraries: Silo (LLNL)

Nature of problem: Performing multiscale simulations, especially those requiring a fast elliptic solver.

Solution method: Provide a framework for parallel simulations on adaptively refined quadtree/octree grids, including a geometric multigrid solver.

Unusual features: The framework uses shared-memory parallelism (OpenMP) instead of MPI.

Introduction

Many systems have a multiscale nature, meaning that physical structures occur at different spatial and temporal scales. These structures can appear at different locations and move in space. Numerical simulations of such systems can be sped up with adaptive mesh refinement (AMR), especially if a fine mesh is required in only a small part of the domain. Here we present Afivo (Adaptive Finite Volume Octree), a framework for simulations with AMR on structured grids. Some of the key characteristics of Afivo are:

  • Adaptively refined quadtree (2D) and octree (3D) grids

  • OpenMP parallelization

  • A geometric multigrid solver

  • Output in Silo and VTK file formats

  • Source code in Fortran 2011 with GNU GPLv3 license.

An overview of Afivo’s functionality and potential applications is given below, together with a brief discussion of our motivation for developing the framework. An overview of the design, data structures and methods is given in Section 2. An important part is the geometric multigrid solver, which handles refinement boundaries in a consistent way. The implementation of this solver is described in Section 3. Finally, some examples are presented in Section 4.

As a generic simulation framework, Afivo comes without solvers for specific physics problems. A user thus has to implement the required numerical methods as well as a suitable refinement criterion, see Section 2.3. We think Afivo could be used when one wants to investigate numerical discretizations or AMR algorithms, or when no existing simulation software is available for the problem at hand. To demonstrate some of the framework’s possibilities, several examples are included in the examples directory of the source code:

  • Examples showing e.g., how to define the computational domain, perform refinement, set boundary conditions and write output.

  • Solving a scalar advection equation in 2D and 3D using the explicit trapezoidal rule and the Koren flux limiter [1].

  • Solving a time-dependent 2D diffusion/heat equation implicitly using the backward Euler method and geometric multigrid routines.

  • Solving a Laplace/Poisson equation on a Cartesian grid (2D, 3D) or in cylindrical (r,z) coordinates with geometric multigrid.

  • Simulating a destabilizing ionization wave in 2D, see Section 4.3.

  • Mapping particles to densities on a mesh, and interpolating mesh variables to particles.
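
The flux limiting used in the advection example can be illustrated with a small sketch (Python here for brevity; Afivo itself is Fortran, and the function names below are our own). Only the limiter formula phi(r) = max(0, min(2r, (1 + 2r)/3, 2)) is taken from Koren [1]:

```python
import numpy as np

def koren_phi(r):
    """Koren limiter: phi(r) = max(0, min(2r, (1 + 2r)/3, 2))."""
    return np.maximum(0.0, np.minimum(np.minimum(2.0 * r, (1.0 + 2.0 * r) / 3.0), 2.0))

def face_value(u_upwind2, u_upwind, u_down, eps=1e-12):
    """Limited reconstruction of u at a cell face, assuming a positive
    advection speed so that u_upwind is the upwind cell value."""
    r = (u_upwind - u_upwind2) / (u_down - u_upwind + eps)
    return u_upwind + 0.5 * koren_phi(r) * (u_down - u_upwind)
```

On smooth data this reduces to a third-order upwind-biased scheme, while near extrema the limiter falls back to first-order upwind, which keeps the solution free of spurious oscillations.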

Dirichlet, Neumann and periodic boundary conditions are supported, but other types of boundary conditions can easily be added. For Dirichlet and Neumann conditions, the user has to provide a routine that specifies the value of the solution/derivative at the boundary. Boundary conditions are implemented through ghost cells, so that numerical methods do not have to be modified near the boundary of a grid block, see Section 1.3. For the same reason, values from neighboring blocks are also communicated through ghost cells. It is the user’s responsibility to ensure that ghost cells are up to date, see Section 2.4.
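
The ghost-cell idea can be sketched as follows (a 1D Python toy, not Afivo's actual routines). For cell-centered data the physical boundary lies halfway between the first interior cell and its ghost cell, which leads to simple linear extrapolation formulas:

```python
import numpy as np

def add_ghost_cells(u, dx, left=("dirichlet", 0.0), right=("neumann", 0.0)):
    """Return a copy of the 1D cell-centered array u with one ghost cell
    per side. Boundary conditions are (kind, value) pairs, where the
    Neumann value is the derivative du/dx at the boundary.

      Dirichlet u_b:  ghost = 2*u_b - interior  (cell average at boundary is u_b)
      Neumann du/dx:  ghost = interior -/+ dx * value  (left/right side)
    """
    g = np.empty(u.size + 2)
    g[1:-1] = u
    for idx, sign, (kind, val) in ((0, -1.0, left), (-1, 1.0, right)):
        interior = g[1] if idx == 0 else g[-2]
        if kind == "dirichlet":
            g[idx] = 2.0 * val - interior
        elif kind == "neumann":
            g[idx] = interior + sign * dx * val
        else:
            raise ValueError("unknown boundary type: " + kind)
    return g
```

With the ghost cells in place, a stencil operation can be applied to all interior cells without any special cases at the block boundary.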

Afivo is most suited for relatively low order spatial discretizations, e.g. second or third order. The corresponding numerical operators have a small stencil, which reduces the communication overhead due to the adaptive grid. Shared-memory parallelism is employed, which means one can experiment with different AMR methods without having to deal with load balancing or the communication between processors. This is for example relevant when comparing different schemes to fill ghost cells near refinement boundaries. With shared-memory parallelism, Afivo can still be used for problems with tens of millions of unknowns as current hardware often provides 16 or more CPU cores with at least as many gigabytes of RAM.

Afivo is written in modern Fortran, using some of the features of Fortran 2011. The 2D and 3D versions of the framework are automatically generated from a set of common source files using a preprocessor. For example, the module src/m_aX_ghostcell.f90, which contains methods for filling ghost cells, is translated to m_a2_ghostcell.f90 (2D) and m_a3_ghostcell.f90 (3D). Most of Afivo's methods and types have a prefix a2_ in 2D and a3_ in 3D. The source code is documented using Doxygen, and it comes with a brief user guide. An online version of this documentation is available through https://gitlab.com/MD-CWI-NL/afivo.

Afivo uses a quadtree/octree grid. For simplicity, the description below is for quadtrees in 2D; the generalization to octrees in 3D is straightforward. A quadtree grid in Afivo consists of boxes (i.e., blocks) of N×N cells, with N an even number. A user can, for example, select boxes of 4 × 4 cells. The coarse grid, which defines the computational domain, can then be constructed from one or more of these boxes, see Fig. 1 and Section 2.2.

Two types of variables are stored: cell-centered variables and face-centered variables. When initializing Afivo, the user has to specify how many of these variables are required. For the cell-centered variables, each box has one layer of ghost cells, as discussed in Section 2.4. For each cell-centered variable, users can specify default procedures for filling ghost cells and to perform interpolation and restriction.

A box in a quadtree grid can be refined by adding four child boxes. These children contain the same number of cells but half the grid spacing, so that they together have the same area as their parent. Each of the children can again be refined, and so on, as illustrated in Fig. 1. There can be up to 30 refinement levels in Afivo. So-called proper nesting or 2:1 balance is ensured, which means that neighboring boxes differ by at most one refinement level.
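
The parent-child relation can be sketched as follows (a toy Python illustration with hypothetical names; Afivo itself stores boxes in flat per-level arrays rather than as linked objects):

```python
from dataclasses import dataclass, field

@dataclass
class Box:
    """Toy quadtree box: its cells (not stored here) span an N x N patch
    with spacing dx; refining adds four children at half the spacing."""
    level: int
    ix: tuple                 # integer index of the box within its level
    dx: float                 # cell spacing inside this box
    children: list = field(default_factory=list)

    @property
    def is_leaf(self):
        return not self.children

    def refine(self):
        assert self.is_leaf, "box is already refined"
        for j in (0, 1):
            for i in (0, 1):
                child_ix = (2 * self.ix[0] + i, 2 * self.ix[1] + j)
                self.children.append(Box(self.level + 1, child_ix, 0.5 * self.dx))

root = Box(level=1, ix=(0, 0), dx=1.0)
root.refine()                 # four children, each with dx = 0.5
root.children[3].refine()     # refinement can continue recursively
```

Because every child has the same number of cells at half the spacing, the four children together cover exactly their parent's area.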

Afivo does not come with built-in refinement criteria. Instead, users have to supply a routine that sets refinement flags for the cells in a box. There are three possible flags: refine, derefine or keep the current refinement. The user’s routine is then automatically called for all relevant boxes of the grid, after which boxes are refined and derefined, see Section 2.3 for details. For simplicity, each mesh adaptation can locally change the refinement level at a location by at most one. After the grid has been modified, the user gets information on which boxes have been removed and which ones have been added.
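
A user-supplied refinement routine could, for example, look at a second-derivative estimate of some variable. The sketch below is our own choice of criterion and thresholds, in Python, and it returns a single flag per box for brevity, whereas Afivo's routine sets per-cell flags:

```python
import numpy as np

REFINE, KEEP, DEREFINE = 1, 0, -1     # the three possible flags

def refinement_flag(phi, ref_tol=1.0, deref_tol=0.1):
    """Flag a box based on max |dx^2 * laplacian(phi)| over its cells,
    a common curvature-style indicator. phi holds the box's cell-centered
    data including one ghost-cell layer on each side."""
    lap = (phi[2:, 1:-1] + phi[:-2, 1:-1] +
           phi[1:-1, 2:] + phi[1:-1, :-2] - 4.0 * phi[1:-1, 1:-1])
    err = np.abs(lap).max()
    if err > ref_tol:
        return REFINE
    if err < deref_tol:
        return DEREFINE
    return KEEP
```

The framework then calls such a routine for all relevant boxes and applies the resulting flags, while enforcing the 2:1 balance described above.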

For each refinement level, Afivo stores three lists: one with all the parent boxes, one with all the leaves (which have no children), and one with both parents and leaves. To perform computations on the boxes, a user can loop over the levels and over these lists in a desired order. Because of the shared memory parallelism, values on the neighbors, parents or children of a box can always be accessed, see Section 2 for details.
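
The level-by-level traversal can be sketched as below (hypothetical layout; in Afivo the inner loop over the boxes of one level is what gets parallelized with OpenMP, since the boxes can be updated independently):

```python
def for_each_box(boxes_per_level, op, order="coarse_to_fine"):
    """Apply op to every box in the given per-level lists.

    boxes_per_level: dict mapping refinement level -> list of boxes
    (e.g. only the leaves, only the parents, or both)."""
    levels = sorted(boxes_per_level)
    if order == "fine_to_coarse":
        levels.reverse()
    for lvl in levels:
        for box in boxes_per_level[lvl]:   # parallelizable: boxes are independent
            op(box)
```

Coarse-to-fine and fine-to-coarse orderings correspond to, e.g., the prolongation and restriction phases of a multigrid cycle.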

There already exist numerous parallel AMR frameworks that operate on structured grids, some of which are listed in Table 1. Some of these frameworks use block-structured grids, in which grid blocks can have different sizes (in terms of number of cells). Any octree mesh is also a block-structured mesh, whereas the opposite is not true. The connectivity of an octree mesh is simpler, because each block has the same number of cells, and blocks are always refined in the same way.

We were interested in AMR frameworks that could be used for simulations of streamer discharges (e.g., [[11], [12], [13]]). Such simulations require a fine mesh where the streamers grow, and at every time step Poisson’s equation has to be solved to compute the electric field. In [14], Paramesh was used for streamer simulations, but the main bottleneck was the Poisson solver. Other streamer models (see e.g., [[15], [16], [17]]) faced the same challenge, because the non-local nature of Poisson’s equation makes an efficient parallel solution difficult, especially on adaptively refined grids. Geometric multigrid methods can overcome most of these challenges, as demonstrated in [18], which adapted its multigrid methods from the Gerris Flow Solver [9]. Afivo’s multigrid implementation is discussed in Section 3. Successful applications of Afivo to 3D streamer simulations can be found in [[19], [20]].

Several of the frameworks listed in Table 1 include multigrid solvers, for example Boxlib, Dendro, Gerris and Ramses. Afivo is different because it is based on shared-memory parallelism and because it is physics-independent (which e.g., Gerris and Ramses are not). Simulations with adaptive mesh refinement often require some experimentation, for example to determine a suitable refinement criterion, to compare multigrid algorithms or to investigate different discretizations near refinement boundaries. Afivo was designed to facilitate such experiments, by keeping the implementation relatively simple:

  • Only shared-memory parallelism is supported, so that data can be accessed directly and no parallel communication or load balancing is required. Note that all of the frameworks listed in Table 1 use MPI (distributed-memory parallelism).

  • Quadtree and octree grids are used, which are probably the simplest grids that support adaptive refinement.

  • Only cell-centered and face-centered variables are supported.

  • The cell-centered variables always have one layer of ghost cells (but more can be obtained).

  • Afivo is application-independent, i.e., it includes no code or algorithms for specific applications.

Because of these simplifications we expect that Afivo can easily be modified, thus providing an option in between the ‘advanced’ distributed-memory codes of Table 1 and uniform grid computations.


Afivo data types and procedures

The most important data types and procedures used in Afivo are described below. Not all details about the implementation can be given here; further information can be found in the code’s documentation.

Multigrid

Elliptic partial differential equations, such as Poisson’s equation, have to be solved in many applications. Multigrid methods [[26], [27], [28], [29]] can be used to solve such equations with great efficiency. The error in the solution is iteratively damped on a hierarchy of grids, with the coarse grids reducing the low frequency (i.e., long wavelength) error components, and the fine grids the high frequency components. When using adaptive mesh refinement on octree grids, geometric multigrid
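
The idea can be made concrete with a minimal V-cycle for the 1D Poisson problem u'' = f with homogeneous Dirichlet conditions on a uniform grid (a Python toy, not Afivo's solver, which additionally has to handle refinement boundaries on quadtree/octree grids):

```python
import numpy as np

def v_cycle(u, f, h, n_pre=2, n_post=2):
    """One multigrid V-cycle for u'' = f with u(0) = u(1) = 0 on a
    uniform grid; u.size must be 2^k + 1 so grids nest down to 3 points."""
    def smooth(n_sweeps):
        for _ in range(n_sweeps):                  # Gauss-Seidel relaxation
            for i in range(1, u.size - 1):
                u[i] = 0.5 * (u[i - 1] + u[i + 1] - h * h * f[i])

    if u.size <= 3:
        smooth(20)                                 # coarsest level: relax to convergence
        return u
    smooth(n_pre)                                  # pre-smoothing damps high frequencies
    r = np.zeros_like(u)                           # residual r = f - A u
    r[1:-1] = f[1:-1] - (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
    rc = np.zeros(u.size // 2 + 1)                 # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)   # coarse-grid error equation A e = r
    u += np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # linear prolongation
    smooth(n_post)                                 # post-smoothing removes interpolation noise
    return u
```

Each cycle reduces the error by a roughly constant factor independent of the grid size, which is what gives multigrid its optimal O(N) complexity.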

Examples

Several examples that demonstrate how to use Afivo are included in the examples folder of Afivo’s source code, see Section 1.1. Here we discuss a few of them in detail.

Conclusion & outlook

This paper describes Afivo, a framework for parallel simulations on adaptively refined quadtree/octree grids with a geometric multigrid solver. We have tried to keep the framework simple to facilitate modification, so it can be used to experiment with AMR algorithms and methods. An overview of Afivo’s main data structures and procedures was given, and the included geometric multigrid solvers have been described. We have presented examples of the multigrid convergence and scaling, of a

Acknowledgments

We thank Margreet Nool for her help with the documentation and the examples. While developing Afivo, JT was supported by project 10755 of the Dutch Technology Foundation STW, which is part of the Netherlands Organisation for Scientific Research (NWO), and which is partly funded by the Ministry of Economic Affairs. JT is now supported by postdoctoral fellowship 12Q6117N from Research Foundation - Flanders (FWO).

References

  • MacNeice, P. et al., Comput. Phys. Comm. (2000)

  • Popinet, S., J. Comput. Phys. (2003)

  • Pancheshnyi, S. et al., J. Comput. Phys. (2008)

  • Montijn, C. et al., J. Comput. Phys. (2006)

  • Li, C. et al., J. Comput. Phys. (2012)

  • Luque, A. et al., J. Comput. Phys. (2012)

  • Kolobov, V. et al., J. Comput. Phys. (2012)

  • Li, C. et al., J. Comput. Phys. (2010)

  • Rabie, M. et al., Comput. Phys. Comm. (2016)

  • Koren, B.

  • D. Calhoun, Adaptive mesh refinement resources, 2015. URL...

  • Zhang, W. et al., SIAM J. Sci. Comput. (2016)

  • P. Colella, D.T. Graves, J.N. Johnson, N.D. Keen, T.J. Ligocki, D.F. Martin, P.W. McCorquodale, D. Modiano, P.O....

  • Hornung, R.D. et al., Eng. Comput. (2006)

  • R.S. Sampath, S.S. Adavani, H. Sundar, I. Lashuk, G. Biros, 2008 SC - International Conference for High Performance...

  • Weinzierl, T., A Framework for Parallel PDE Solvers on Multiscale Adaptive Cartesian Grids (2009)

  • Teyssier, R., Astron. Astrophys. (2002)

  • Vitello, P.A. et al., Phys. Rev. E (1994)

  • Yi, W.J. et al., J. Phys. D: Appl. Phys. (2002)

  • Ebert, U. et al., J. Geophys. Res. (2010)

This paper and its associated computer program are available via the Computer Physics Communications homepage on ScienceDirect (http://www.sciencedirect.com/science/journal/00104655).
