Published by De Gruyter, July 25, 2020

The deal.II library, Version 9.2

  • Daniel Arndt, Wolfgang Bangerth, Bruno Blais, Thomas C. Clevenger, Marc Fehling, Alexander V. Grayver, Timo Heister, Luca Heltai, Martin Kronbichler, Matthias Maier, Peter Munch, Jean-Paul Pelteret, Reza Rastak, Ignacio Tomas, Bruno Turcksin, Zhuoran Wang, and David Wells

Abstract

This paper provides an overview of the new features of the finite element library deal.II, version 9.2.

MSC 2010: 65M60; 65N30; 65Y05

4 Acknowledgments

This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy.

Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525. This document describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.

deal.II is a worldwide project with dozens of contributors around the globe. In addition to the authors of this paper, the following people contributed code to this release:

Pasquale Africa, Ashna Aggarwal, Giovanni Alzetta, Mathias Anselmann, Kirana Bergstrom, Manaswinee Bezbaruah, Benjamin Brands, Yong-Yong Cai, Fabian Castelli, Joshua Christopher, Ester Comellas, Katherine Cosburn, Denis Davydov, Elias Dejene, Stefano Dominici, Brett Dong, Luel Emishaw, Niklas Fehn, Isuru Fernando, Rebecca Fildes, Menno Fraters, Andres Galindo, Daniel Garcia-Sanchez, Rene Gassmoeller, Melanie Gerault, Nicola Giuliani, Brandon Gleeson, Anne Glerum, Krishnakumar Gopalakrishnan, Graham Harper, Mohammed Hassan, Nicole Hayes, Bang He, Johannes Heinz, Jiuhua Hu, Lise-Marie Imbert-Gerard, Manu Jayadharan, Daniel Jodlbauer, Marie Kajan, Guido Kanschat, Alexander Knieps, Uwe Köcher, Paras Kumar, Konstantin Ladutenko, Charu Lata, Adam Lee, Wenyu Lei, Katrin Mang, Mae Markowski, Franco Milicchio, Adriana Morales Miranda, Bob Myhill, Emily Novak, Omotayo Omosebi, Alexey Ozeritskiy, Rebecca Pereira, Geneva Porter, Laura Prieto Saavedra, Roland Richter, Jonathan Robey, Irabiel Romero, Matthew Russell, Tonatiuh Sanchez-Vizuet, Natasha S. Sharma, Doug Shi-Dong, Konrad Simon, Stephanie Sparks, Sebastian Stark, Simon Sticko, Jan Philipp Thiele, Jihuan Tian, Sara Tro, Ferdinand Vanmaele, Michal Wichrowski, Julius Witte, Winnifried Wollner, Ming Yang, Mario Zepeda Aguilar, Wenjuan Zhang, Victor Zheng.

Their contributions are much appreciated!

deal.II and its developers are financially supported through a variety of funding sources:

D. Arndt and B. Turcksin: Research sponsored by the Laboratory Directed Research and Development Program of Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Department of Energy.

W. Bangerth, T. C. Clevenger, and T. Heister were partially supported by the Computational Infrastructure in Geodynamics initiative (CIG), through the National Science Foundation under Award No. EAR-1550901 and the University of California, Davis.

W. Bangerth was also partially supported by award OAC-1835673 as part of the Cyberinfrastructure for Sustained Scientific Innovation (CSSI) program, DMS-1821210, and EAR-1925595.

B. Blais was partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) through the RGPIN-2020-04510 Discovery Grant.

T. C. Clevenger was also partially supported by EAR-1925575 and OAC-2015848.

A. V. Grayver was partially supported by the European Space Agency Swarm DISC program.

T. Heister was also partially supported by the National Science Foundation (NSF) Award DMS-2028346, OAC-2015848, EAR-1925575, and by Technical Data Analysis, Inc. through US Navy STTR Contract N68335-18-C-0011.

L. Heltai was partially supported by the Italian Ministry of Education, University and Research (MIUR), under the 2017 PRIN project NA-FROM-PDEs MIUR PE1, ‘Numerical Analysis for Full and Reduced Order Methods for the efficient and accurate solution of complex systems governed by Partial Differential Equations’.

M. Kronbichler was supported by the German Research Foundation (DFG) under the project ‘High-order discontinuous Galerkin for the exa-scale’ (ExaDG) within the priority program ‘Software for Exascale Computing’ (SPPEXA) and the Bayerisches Kompetenznetzwerk für Technisch–Wissenschaftliches Hoch- und Höchstleistungsrechnen (KONWIHR) in the context of the project ‘Performance tuning of high-order discontinuous Galerkin solvers for SuperMUC-NG’.

M. Maier was partially supported by ARO MURI Award No. W911NF-14-0247 and NSF Award DMS-1912847.

D. Wells was supported by the National Science Foundation (NSF) award OAC-1450327.

Z. Wang was partially supported by the National Science Foundation under award OAC-1835673.

The Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University has provided hosting services for the deal.II web page.

The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing access to HPC resources that have contributed to the research results reported within this paper.

Clemson University is acknowledged for generous allotment of compute time on the Palmetto cluster.


Received: 2020-06-11
Revised: 2020-07-02
Accepted: 2020-07-02
Published Online: 2020-07-25
Published in Print: 2020-09-25

© 2020 Walter de Gruyter GmbH, Berlin/Boston
