feltor-dev/feltor


Visit our project Homepage for documentation, user guides, examples and more!

[Image: 3d simulation]

FELTOR (Full-F ELectromagnetic code in TORoidal geometry) is a modular scientific software package used for:

  • Physics: study fluid models for magnetised (fusion) plasmas in one, two and three dimensions

  • Numerics: develop and study numerical methods for these models, in particular novel discontinuous Galerkin methods and structured grid generators

  • High performance computing: investigate parallel performance, binary reproducibility and accuracy of the above algorithms on modern hardware architectures.

FELTOR applications are platform independent and run on a large variety of hardware from laptop CPUs to GPUs to high performance compute clusters.


1. Quick start guide

This guide discusses how to set up, build, test and benchmark FELTOR on a given system. Please read it before you proceed to the user guide to learn how to use the library in your own programs.

System Setup using CMake

The first step is to clone and configure the FELTOR repository from GitHub.

git clone https://www.github.com/feltor-dev/feltor
cd feltor
cmake --preset cpu # or gpu, omp, mpi-cpu, mpi-gpu, mpi-omp

You may need to install the external dependencies libnetcdf-dev, liblapack-dev and libboost-dev (and libglfw3-dev for OpenGL output and libopenmpi-dev for MPI support) from your system package manager.
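
For example, on Debian or Ubuntu the packages listed above can be installed via the system package manager (the exact package names may differ on other distributions):

sudo apt install libnetcdf-dev liblapack-dev libboost-dev libglfw3-dev libopenmpi-dev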

On Windows you can use the built-in git client and CMake integration of Visual Studio. It is easiest to use the built-in vcpkg manager to install the netcdf-c, glfw3, lapack and boost-headers dependencies.
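
On the vcpkg command line this would presumably read (a sketch, assuming vcpkg in classic mode):

vcpkg install netcdf-c glfw3 lapack boost-headers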

There are 6 presets targeting 6 different hardware architectures. All require the C++17 standard. Each preset X creates its binary directory build/X.

Table 1. System requirements

CMake preset | Requirements | Description
cpu          | gcc >= 9 or msvc >= 19 or icc >= 19.0 or clang >= 19 | Single core CPU, no parallelization; support for the AVX and FMA instruction sets is recommended
omp          | OpenMP >= 2 | Multi-core CPU parallelisation; set the OMP_NUM_THREADS environment variable to choose the number of OpenMP threads
gpu          | nvcc >= 11.0 | Parallel computation on a single NVidia GPU
mpi-cpu      | MPI >= 3 | Distributed memory systems, pure MPI parallelisation
mpi-omp      | MPI >= 3, OpenMP >= 2 | Hybrid MPI + OpenMP parallelisation; in this configuration you may want to investigate how OpenMP threads map to CPU cores
mpi-gpu      | MPI >= 3, nvcc >= 11.0 | Hybrid MPI + GPU parallelisation; each MPI process targets one GPU; the MPI implementation should ideally be CUDA-aware

Our GPU backend uses the Nvidia-CUDA programming environment, and in order to compile and run a program for a GPU a user needs the nvcc compiler and an NVidia GPU. However, we explicitly note here that due to the modular design of our software a user does not have to possess a GPU nor the nvcc compiler. The CPU version of the backend is equally valid and provides the same functionality. Analogously, an MPI installation is only required if the user targets a distributed memory system.

Available targets

The Feltor project defines a host of CMake targets that can be built after configuration. To build everything run

cmake --build build/cpu -j 4
# Replace "cpu" with the preset of your choice, here and in the following.
# -j 4 activates parallel compilation with 4 threads.

The Feltor CMake targets are organised into three categories: tests, benchmarks and production projects. These can be targeted individually using

cmake --build build/cpu --target dg_tests -j 4        # Compile all tests of the dg library
cmake --build build/cpu --target dg_benchmarks -j 4   # Compile all benchmarks of the dg library
cmake --build build/cpu --target feltor_projects -j 4 # Compile all production projects of feltor

The tests and benchmarks will be built in build/X/inc and the projects in build/X/src, where X is the preset in use. The location of the executables in build/X exactly mirrors the folder structure of the original C++ files in feltor; for example feltor/inc/dg/blas_b.cpp compiles to build/X/inc/dg/blas_b. The tests can be run using

ctest --test-dir build/cpu
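
ctest's standard -R option filters tests by a regular expression, so a single component can be tested in isolation, for example (assuming test names mirror the target names described below):

ctest --test-dir build/cpu -R dg_topology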

Lastly, one can also target individual programs. All programs in the dg library start with the prefix dg_ followed by the component name backend, topology, geometries, file or matrix, followed by the program name without suffix. The feltor project targets follow the naming scheme project_target, where project is the name of the folder in the src directory and target is the program name without suffix. Again, the output name in the binary directory follows the original folder structure and program name. For example:

cmake --build build/cpu --target dg_blas_b
./build/cpu/inc/dg/blas_b
# Compile and run the benchmark program feltor/inc/dg/blas_b.cpp

cmake --build build/cpu --target dg_topology_derivatives_t
./build/cpu/inc/dg/topology/derivatives_t
# Compile and run the test program feltor/inc/dg/topology/derivatives_t.cpp

cmake --build build/cpu --target feltor_feltor
./build/cpu/src/feltor/feltor
# Compile and run the 3d feltor code in feltor/src/feltor/feltor.cpp

Again, remember to replace cpu with the preset of your choice and mind the various options when running parallel programs, e.g.

cmake --preset gpu
cmake --build build/gpu --target dg_blas_b
./build/gpu/inc/dg/blas_b
# Compile and run the benchmark program feltor/inc/dg/blas_b.cpp for GPU

cmake --preset omp
cmake --build build/omp --target dg_blas_b
export OMP_NUM_THREADS=4
./build/omp/inc/dg/blas_b
# Compile and run the benchmark program feltor/inc/dg/blas_b.cpp for OpenMP

cmake --preset mpi-cpu
cmake --build build/mpi-cpu --target feltor_feltor
mpirun -n 4 ./build/mpi-cpu/src/feltor/feltor
# Compile and run the 3d feltor code in feltor/src/feltor/feltor.cpp for pure MPI using 4 MPI processes

Using FELTOR’s dg library in CMake

FELTOR contains a library called the dg-library (from discontinuous Galerkin). To integrate FELTOR’s dg library in your own project via CMake, currently the only option is to add it as a submodule, i.e. either (i) use FetchContent directly, or (ii) use the CMake package manager CPM (our recommendation), or (iii) add feltor as a git submodule and use add_subdirectory in your CMakeLists.txt. We here show the CPM version. To get started, follow the CPM quick start guide to set up the file cmake/CPM.cmake. It is also highly recommended to set the CPM_SOURCE_CACHE environment variable.
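
For example, to let CPM cache downloaded dependencies across build directories (the cache location below is an arbitrary choice):

export CPM_SOURCE_CACHE=$HOME/.cache/CPM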

CMake’s install rules and find_package currently do not work well with targets that can be compiled for various languages (see this issue).

The available library targets in CMake are of the format feltor::dg::component, where component is one of the following:

Table 2. Feltor’s dg library targets feltor::dg::component

component    | Corresponding header       | Description
dg           | dg/algorithm.h             | Depends on cccl and vectorclass (loaded via CPMAddPackage)
geometries   | dg/geometries/geometries.h | Depends on feltor::dg::file::json
matrix       | dg/matrix/matrix.h         | Depends on liblapack-dev and libboost-dev
file         | dg/file/file.h             | Depends on feltor::dg::file::json and feltor::dg::file::netcdf
file::json   | dg/file/json_utilities.h   | Depends on either nlohmann_json >= 3.11 (default) or jsoncpp >= 1.9.5 (set FELTOR_FILE_WITH_JSONCPP ON), loaded via CPMAddPackage
file::netcdf | dg/file/nc_utilities.h     | Depends on libnetcdf-dev

As noted before, you may need to install the external dependencies libnetcdf-dev, liblapack-dev and libboost-dev from your system package manager (or use e.g. the vcpkg manager to install netcdf-c, lapack and boost-headers). Note that you can set the options FELTOR_DG_WITH_MATRIX OFF and FELTOR_FILE_WITH_NETCDF OFF to avoid having to install netcdf, lapack or boost.
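
At configure time this could look like the following sketch (the build directory name is arbitrary):

cmake -B build -DFELTOR_DG_WITH_MATRIX=OFF -DFELTOR_FILE_WITH_NETCDF=OFF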

Furthermore, since feltor’s dg library depends on cccl, we inherit their option CCCL_THRUST_DEVICE_SYSTEM, which can be either CPP, OMP or CUDA. Since with CUDA a new language must be enabled (which can only be done once in a CMake project), we must add this to the CMake file:

CMakeLists.txt
cmake_minimum_required(VERSION 3.26)
project( myProject
    VERSION   1.0.0
    LANGUAGES CXX
)
# We need to enable the CUDA language if the user wants it
if(CCCL_THRUST_DEVICE_SYSTEM STREQUAL "CUDA" OR CCCL_THRUST_DEVICE_SYSTEM STREQUAL "")
    enable_language(CUDA)
    set_source_files_properties(main.cpp PROPERTIES LANGUAGE CUDA)
endif()
include(cmake/CPM.cmake)
CPMAddPackage(
    NAME feltor
    GITHUB_REPOSITORY "feltor-dev/feltor"
    VERSION 8.2
    SYSTEM ON
    EXCLUDE_FROM_ALL ON
    OPTIONS "FELTOR_DG_WITH_MATRIX OFF" "FELTOR_FILE_WITH_NETCDF OFF"
)
add_executable(main main.cpp)
# The base dg library header "dg/algorithm.h"
target_link_libraries( main PRIVATE feltor::dg::dg)

Note that the dg library is header-only, which means that you just have to include the relevant header(s) and you’re good to go. For example, in the following program we compute the square L2 norm of a function:

main.cpp
#include <iostream>
#include <cmath>
//include the basic dg-library
#include "dg/algorithm.h"

double function(double x, double y){ return exp(x)*exp(y); }

int main()
{
    //create a 2d discretization of [0,2]x[0,2] with 3 polynomial coefficients
    dg::CartesianGrid2d g2d( 0, 2, 0, 2, 3, 20, 20);
    //discretize a function on this grid
    const dg::DVec x = dg::evaluate( function, g2d);
    //create the volume element
    const dg::DVec vol2d = dg::create::volume( g2d);
    //compute the square L2 norm on the device
    double norm = dg::blas2::dot( x, vol2d, x);
    // norm is now: (exp(4)-exp(0))^2/4
    std::cout << norm << std::endl;
    return 0;
}

To compile and run this code for a GPU use

cmake -Bbuild/gpu -DCCCL_THRUST_DEVICE_SYSTEM="CUDA" -DCMAKE_CUDA_ARCHITECTURES="native" -DCMAKE_CUDA_FLAGS="-march=native -O3"
cmake --build build/gpu
./build/gpu/main

Or if you want to use OpenMP and gcc instead of CUDA for the device functions, you can also use

cmake -Bbuild/omp -DCCCL_THRUST_DEVICE_SYSTEM="OMP" -DCMAKE_CXX_FLAGS="-march=native -O3"
cmake --build build/omp
export OMP_NUM_THREADS=4
./build/omp/main

If you do not want any parallelization, you can use a single thread version

cmake -Bbuild/cpu -DCCCL_THRUST_DEVICE_SYSTEM="CPP" -DCMAKE_CXX_FLAGS="-march=native -O3"
cmake --build build/cpu
./build/cpu/main

If you want to use MPI, just include the MPI header before any other FELTOR header and use our convenient typedefs like so:

main.cpp
#include <iostream>
#include <cmath>
#ifdef WITH_MPI
//activate MPI in FELTOR
#include "mpi.h"
#endif
#include "dg/algorithm.h"

double function(double x, double y){ return exp(x)*exp(y); }

int main(int argc, char* argv[])
{
#ifdef WITH_MPI
    //init MPI and create a 2d Cartesian communicator assuming 4 MPI processes
    MPI_Init( &argc, &argv);
    int periods[2] = {true, true}, np[2] = {2, 2};
    MPI_Comm comm;
    MPI_Cart_create( MPI_COMM_WORLD, 2, np, periods, true, &comm);
#endif
    //create a 2d discretization of [0,2]x[0,2] with 3 polynomial coefficients
    dg::x::CartesianGrid2d g2d( 0, 2, 0, 2, 3, 20, 20
#ifdef WITH_MPI
    , comm
#endif
    );
    //discretize a function on this grid
    const dg::x::DVec x = dg::evaluate( function, g2d);
    //create the volume element
    const dg::x::DVec vol2d = dg::create::volume( g2d);
    //compute the square L2 norm
    double norm = dg::blas2::dot( x, vol2d, x);
    //on every process norm is now: (exp(4)-exp(0))^2/4
#ifdef WITH_MPI
    //be a good MPI citizen and clean up
    MPI_Finalize();
#endif
    return 0;
}

The CMake file needs to be modified like this:

CMakeLists.txt
option(MAIN_WITH_MPI "Compile main with MPI parallelisation" OFF)
if(MAIN_WITH_MPI)
    find_package(MPI REQUIRED) # provides the MPI::MPI_CXX target
    target_link_libraries(main PRIVATE MPI::MPI_CXX)
    target_compile_definitions(main PRIVATE WITH_MPI)
endif()

Compile e.g. for a hybrid MPI + OpenMP hardware platform with

cmake -Bbuild/mpi-omp -DCCCL_THRUST_DEVICE_SYSTEM="OMP" -DCMAKE_CXX_FLAGS="-march=native -O3" -DMAIN_WITH_MPI=ON
cmake --build build/mpi-omp
export OMP_NUM_THREADS=2
mpirun -n 4 ./build/mpi-omp/main

This will run 4 MPI processes with 2 OpenMP threads each.

Note the striking similarity to the previous program. Especially the line calling the dot function did not change at all. The compiler chooses the correct implementation for you! This is a first example of platform independent code.

Using Makefiles (Deprecated)

Open a terminal and clone the repository into any folder you like

git clone https://www.github.com/feltor-dev/feltor

You also need to clone cccl, distributed under the Apache-2.0 license. Also, we need Agner Fog’s vcl library (Apache 2.0). So, again in a folder of your choice:

git clone https://www.github.com/nvidia/cccl
git clone https://www.github.com/vectorclass/version2 vcl

Our code only depends on external libraries that are themselves openly available. If version2 of the vectorclass library does not work for you, you can also try version1.
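
In that case the clone command would presumably become (assuming the vectorclass/version1 repository, linked under the same folder name):

git clone https://www.github.com/vectorclass/version1 vcl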

Running a FELTOR test or benchmark program

In order to compile one of the many test and benchmark codes inside the FELTOR library, you need to tell the FELTOR configuration where the external libraries are located on your computer. The default way to do this is to go into your HOME directory, make an include directory and link the paths in this directory:

cd ~
mkdir include
cd include
ln -s path/to/cccl/thrust/thrust # Yes, thrust is there twice!
ln -s path/to/cccl/cub/cub
ln -s path/to/cccl/libcudacxx/include/cuda
ln -s path/to/cccl/libcudacxx/include/nv
ln -s path/to/vcl

If you do not like this, you can also set the include paths in your own config file as described here.

Now let us compile the first benchmark program.

cd path/to/feltor/inc/dg
make blas_b device=cpu # (for a single thread CPU version)
# or
make blas_b device=omp # (for an OpenMP version)
# or
make blas_b device=gpu # (if you have a GPU and nvcc)

Run the code with

./blas_b

and when prompted for input vector sizes type for example 3 100 100 10, which makes a grid with 3 polynomial coefficients, 100 cells in x, 100 cells in y and 10 in z. If you compiled for OpenMP, you can set the number of threads with e.g. export OMP_NUM_THREADS=4.
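
Assuming the program reads these sizes from standard input (as the MPI examples below suggest), the prompt can also be scripted:

echo 3 100 100 10 | ./blas_b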

This is a benchmark program to benchmark various elemental functions the library is built on. Go ahead and vary the input parameters and see how your hardware performs. You can compile and run any other program that ends in _t.cu (test programs) or _b.cu (benchmark programs) in feltor/inc/dg in this way.

Now, let us test the MPI setup.

You can of course skip this if you don’t have MPI installed on your computer. If you intend to use the MPI backend, an implementation library of the MPI standard is required. Per default mpic++ is used for compilation.

cd path/to/feltor/inc/dg
make blas_mpib device=cpu # (for MPI+CPU)
# or
make blas_mpib device=omp # (for MPI+OpenMP)
# or
make blas_mpib device=gpu # (for MPI+GPU, requires CUDA-aware MPI installation)

Run the code with mpirun -n '# of procs' ./blas_mpib, then tell it how many processes you want to use in the x-, y- and z-direction, for example 2 2 1 (i.e. 2 processes in x, 2 processes in y and 1 in z; the total number of processes is 4). When prompted for input vector sizes type for example 3 100 100 10 (the number of cells divided by the number of processes must be an integer). If you compiled for MPI+OpenMP, you can set the number of OpenMP threads with e.g. export OMP_NUM_THREADS=2.
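
Scripted, and assuming both the process layout and the vector sizes are read from standard input, a four-process run could look like:

export OMP_NUM_THREADS=2
echo 2 2 1 3 100 100 10 | mpirun -n 4 ./blas_mpib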

Running a FELTOR simulation

Now, we want to compile and run a simulation program. To this end, we have to download and install some additional libraries for I/O operations.

First, we need to install jsoncpp (distributed under the MIT License), which on Linux is available as libjsoncpp-dev through the package management system. For a manual build check the instructions on JsonCpp.

# You may have to manually link the include path
cd ~/include
ln -s /usr/include/jsoncpp/json

For data output we use the NetCDF-C library under an MIT-like license (we use the netcdf-4 file format). The underlying HDF5 library also uses a very permissive license. Both can be installed easily on Linux through the libnetcdf-dev and libhdf5-dev packages. For a manual build follow the build instructions in the netcdf documentation. Note that by default we use the serial netcdf and hdf5 libraries also in the MPI versions of applications.
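
On Debian or Ubuntu, for instance:

sudo apt install libnetcdf-dev libhdf5-dev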

Some desktop applications in FELTOR use the draw library (developed by us, also under MIT), which depends on glfw3, an OpenGL development library under a BSD-like license. There is a libglfw3-dev package for convenient installation. Again, link path/to/draw in the include folder.

If you are on an HPC cluster, you may need to set the INCLUDE and LIB variables manually. For details on how FELTOR’s Makefiles are configured please see the config file. There are also examples of some existing Makefiles in the same folder.

We are now ready to compile and run a simulation program

cd path/to/feltor/src/toefl # or any other project in the src folder
make toefl device=gpu # (compile for gpu, cpu or omp)
cp input/default.json inputfile.json # create an input file
./toefl inputfile.json # (behold a live simulation with glfw output on screen)
# or
make toefl_hpc device=gpu # (compile for gpu, cpu or omp)
cp input/default_hpc.json inputfile_hpc.json # create an input file
./toefl_hpc inputfile_hpc.json outputfile.nc # (a single node simulation with output stored in a file)
# or
make toefl_mpi device=omp # (compile for gpu, cpu or omp)
export OMP_NUM_THREADS=2 # (set the OpenMP thread number, use 1 for pure MPI)
echo 2 2 | mpirun -n 4 ./toefl_mpi inputfile_hpc.json outputfile.nc
# (a multi node simulation with in total 8 threads and output stored in a file)
# The mpi program will wait for you to type the number of processes in x and y direction
# before running. That is why the echo is there.

Default input files are located in path/to/feltor/src/toefl/input. All three programs solve the same equations. The technical documentation on what equations are discretized, input/output parameters, etc. can be generated as a pdf with make doc in the path/to/feltor/src/toefl directory.

2. Documentation

The documentation of the dg library was generated with Doxygen. You can generate a local version directly from source code. This depends on the doxygen, libjs-mathjax, graphviz and doxygen-awesome packages. Type make doc in the folder path/to/feltor/doc and open index.html (a symbolic link to dg/html/modules.html) with your favorite browser. Finally, also note the documentation of thrust.
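
For example (any browser will do, firefox is just an illustration):

cd path/to/feltor/doc
make doc
firefox index.html # index.html is a symbolic link to dg/html/modules.html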

We maintain tex files in every src folder for technical documentation, which can be compiled using pdflatex with make doc in the respective src folder.

3. Authors, Acknowledgements, Contributions

FELTOR has been developed by Matthias Wiesenberger and Markus Held. Please see the Acknowledgements section on our homepage for a full list of contributors and funding. Contribution guidelines can be found in the CONTRIBUTING file.

License

This project is licensed under the MIT license - see LICENSE for details.
