POT3D: High Performance Potential Field Solver
POT3D is a Fortran code that computes potential field solutions to approximate the solar coronal magnetic field using observed photospheric magnetic fields as a boundary condition. It can be used to generate potential field source surface (PFSS), potential field current sheet (PFCS), and open field (OF) models. It has been (and continues to be) used for numerous studies of coronal structure and dynamics. The code is highly parallelized using MPI and is GPU-accelerated using Fortran standard parallelism (`do concurrent`) and OpenMP Target for data movement and device selection, along with an option to use the NVIDIA cuSparse library. The HDF5 file format is used for input/output.
POT3D is the potential field solver for the WSA/DCHB model in the CORHEL software suite publicly hosted at the Community Coordinated Modeling Center (CCMC).
A version of POT3D that includes GPU-acceleration with both MPI+OpenACC and MPI+OpenMP was released as part of the Standard Performance Evaluation Corporation's (SPEC) beta version of the SPEChpc(TM) 2021 benchmark suites.
Details of the POT3D code can be found in these publications:
- Variations in Finite Difference Potential Fields. Caplan, R.M., Downs, C., Linker, J.A., and Mikic, Z. Ap.J. 915(1), 44 (2021)
- From MPI to MPI+OpenACC: Conversion of a legacy FORTRAN PCG solver for the spherical Laplace equation. Caplan, R.M., Mikic, Z., and Linker, J.A. arXiv:1709.01126 (2017)
The included `build.sh` script takes a configuration file, generates a Makefile, and builds the code. The folder `conf` contains example configuration files for various compilers and systems. We recommend copying the configuration file closest to your setup and then modifying it to conform to your compiler and system (such as HDF5 library paths/flags, compiler flags, etc.).

Given a configuration file `conf/my_custom_build.conf`, the build script is invoked as:

`./build.sh ./conf/my_custom_build.conf`
After building the code, you can test that it is working by running `./validate.sh`. This performs a small run case using 1 MPI rank. To perform the validation with more ranks, use `./validate.sh <NP>`, where `<NP>` is the number of ranks requested. The run is performed in `testsuite/validation/run/`. The result is checked against a reference solution (in `/runs/validation/validation`) and a PASS/FAIL message is displayed.
The validation run sets `ifprec=2`, but if the code is compiled for NVIDIA GPUs without cuSparse, it is automatically changed to `ifprec=1`.
POT3D uses a namelist in an input text file called `pot3d.dat` to set all parameters of a run. See the provided `pot3d_input_documentation.txt` file for details on the various parameter options. For any run, an input 2D data set in HDF5 format is required for the lower radial magnetic field (`Br`) boundary condition. Examples of this file are contained in the `examples` and `testsuite` folders.
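If you want to see how one of those boundary files is laid out before preparing your own, its contents can be listed with a few lines of Python using `h5py`. This is only a convenience sketch (not part of the repository), and the file name `br_input.h5` is a placeholder; point it at one of the HDF5 files shipped in the `examples` or `testsuite` folders.

```python
# Minimal sketch: list the datasets inside a 2D Br boundary HDF5 file.
# "br_input.h5" is a placeholder file name; use one of the files from the
# examples/ or testsuite/ folders.
import h5py

def show_contents(name, obj):
    if isinstance(obj, h5py.Dataset):
        print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")

with h5py.File("br_input.h5", "r") as f:
    f.visititems(show_contents)
```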
To run POT3D, set the desired run parameters in a `pot3d.dat` text file, then copy or link the `pot3d` executable into the same directory as `pot3d.dat` and run the command:

`<MPI_LAUNCHER> -np <N> ./pot3d`

where `<N>` is the total number of MPI ranks to use (typically equal to the number of CPU cores) and `<MPI_LAUNCHER>` is your MPI run command (e.g. `mpiexec`, `mpirun`, `ibrun`, `srun`, etc.).

For example: `mpiexec -np 1024 ./pot3d`
Important!

For CPU runs, set `ifprec=2` in the `pot3d.dat` input file.

For GPU runs, set `ifprec=1` in the `pot3d.dat` input file, unless you built with the cuSparse library option, in which case you should set `ifprec=2`.

For standard cases, one should launch the code such that the number of MPI ranks per node is equal to the number of GPUs per node, e.g. `mpiexec -np <N> --ntasks-per-node 4 ./pot3d` or `mpiexec -np <N> --npersocket 2 ./pot3d`.

If the cuSparse library option was used to build the code, then set `ifprec=2` in `pot3d.dat`. If the cuSparse library option was NOT used to build the code, it is critical to set `ifprec=1` for efficient performance.
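If you want to double-check which preconditioner a run directory is configured for, the namelist can be parsed from Python with the third-party `f90nml` package. This is a hedged sketch (not part of the repository); the namelist group name inside `pot3d.dat` is not assumed, so the sketch searches every group for `ifprec`.

```python
# Minimal sketch: report the ifprec setting in pot3d.dat using f90nml
# (pip install f90nml). The namelist group name is not assumed; all groups
# are searched for the ifprec parameter.
import f90nml

def get_ifprec(namelist_file="pot3d.dat"):
    nml = f90nml.read(namelist_file)
    for group in nml.values():
        if "ifprec" in group:
            return group["ifprec"]
    return None

ifprec = get_ifprec()
if ifprec == 1:
    print("ifprec=1: intended for GPU builds without cuSparse.")
elif ifprec == 2:
    print("ifprec=2: intended for CPU runs, or GPU builds with cuSparse.")
else:
    print("ifprec not found in pot3d.dat.")
```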
To estimate how much memory (RAM) is needed for a run, compute:

memory needed (GB) = nr * nt * np * 8 * 15 / 1024 / 1000 / 1000

where `nr`, `nt`, and `np` are the chosen problem sizes in the `r`, `theta`, and `phi` dimensions. Note that this estimate applies when using `ifprec=1`. If using `ifprec=2`, the required memory is ~2x higher on the CPU, and even higher when using cuSparse on the GPU.
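As a quick sanity check, the estimate above can be evaluated in a few lines of Python (a convenience sketch, not part of the repository):

```python
# Minimal sketch: evaluate the RAM estimate from the formula above.
def pot3d_memory_gb(nr, nt, np_, ifprec=1):
    """Approximate RAM in GB for an nr x nt x np grid (formula from this README)."""
    gb = nr * nt * np_ * 8 * 15 / 1024 / 1000 / 1000
    if ifprec == 2:
        gb *= 2  # roughly 2x higher on the CPU with ifprec=2
    return gb

# Example: the 'small' test grid (133x361x901) comes out to roughly 5 GB,
# in line with the ~6 GB quoted in the test list below.
print(f"{pot3d_memory_gb(133, 361, 901):.1f} GB")
```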
Depending on the input parameters, POT3D can have various outputs. Typically, the three components of the potential magnetic field are output as HDF5 files. In every run, the following two text files are output:
- `pot3d.out` An output log showing grid information and magnetic energy diagnostics.
- `timing.out` Time profile information of the run.

Some useful python scripts for reading and plotting the POT3D input data, and for reading the output data, can be found in the `bin` folder.
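For a quick look at a 2D map such as the input Br without using the `bin` scripts, something like the following works. This is only a rough sketch (not one of the repository's scripts): the file name is a placeholder and no particular dataset name is assumed (the first 2D dataset found in the file is plotted).

```python
# Minimal sketch: quick-look plot of a 2D POT3D HDF5 map (e.g. the input Br).
# "br_input.h5" is a placeholder; the first 2D dataset in the file is used.
import h5py
import numpy as np
import matplotlib.pyplot as plt

def first_2d_dataset(filename):
    found = {}
    with h5py.File(filename, "r") as f:
        def visitor(name, obj):
            if not found and isinstance(obj, h5py.Dataset) and obj.ndim == 2:
                found[name] = np.array(obj)
        f.visititems(visitor)
    return found.popitem()  # (dataset name, 2D array)

name, br = first_2d_dataset("br_input.h5")
lim = np.abs(br).max()
plt.pcolormesh(br, cmap="RdBu_r", vmin=-lim, vmax=lim)
plt.colorbar(label="Br")
plt.title(name)
plt.savefig("br_map.png", dpi=150)
```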
In the `examples` folder, we provide ready-to-run examples of three use cases of POT3D in the following folders:
- `/potential_field_source_surface` A standard PFSS run with a source surface radius of 2.5 Rsun.
- `/potential_field_current_sheet` A standard PFCS run using the outer boundary of the PFSS example as its inner boundary condition, with a domain that extends to 30 Rsun. The magnetic field solution produced is unsigned.
- `/open_field` An example of computing the "open field" model from the solar surface out to 30 Rsun, using the same input surface Br as the PFSS example. The magnetic field solution produced is unsigned.
In the `testsuite` folder, we provide test cases of various sizes that can be used to validate and test the performance of POT3D. Each test case contains an `input` folder with the run input files, a `run` folder used to run the test, and a `validation` folder containing the output diagnostics used to validate the test, as well as a text file named `validation_run_information.txt` with information on how the validation run was computed (system, compiler, number of ranks, etc.) and performance details. Note that all tests are currently set to use `ifprec=1` only. An option to use `ifprec=2` will be added later.
To run a test, use the included script `run_test.sh` as:

`run_test.sh <TEST> <NP>`

where `<TEST>` is the test folder name and `<NP>` is the number of MPI ranks to use. The test will run and then use the included script `bin/pot3d_validate.sh`, which takes two `pot3d.out` files and compares their magnetic energy values in order to validate the run results.
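If you prefer to do a similar comparison from Python rather than with `bin/pot3d_validate.sh`, a rough sketch is shown below. It does not assume the exact `pot3d.out` layout; it simply collects the floating-point values from lines that mention energy and compares them within a relative tolerance.

```python
# Minimal sketch (bin/pot3d_validate.sh is the supported tool): compare the
# magnetic energy diagnostics of two pot3d.out files. The exact log layout is
# not assumed -- floats are collected from any line containing "energy".
import math
import re

FLOAT_RE = re.compile(r"[-+]?\d+\.\d+(?:[eEdD][-+]?\d+)?")

def energy_values(path):
    values = []
    with open(path) as f:
        for line in f:
            if "energy" in line.lower():
                for tok in FLOAT_RE.findall(line):
                    values.append(float(tok.replace("D", "E").replace("d", "e")))
    return values

def same_energies(file_a, file_b, rel_tol=1e-6):
    a, b = energy_values(file_a), energy_values(file_b)
    return len(a) == len(b) and all(
        math.isclose(x, y, rel_tol=rel_tol) for x, y in zip(a, b))

# Illustrative paths: a test's run/ output vs. its validation/ reference.
print("PASS" if same_energies("run/pot3d.out", "validation/pot3d.out") else "FAIL")
```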
The following is a list of the included tests with their problem sizes and memory requirements:
- `validation` Grid size: 63x91x225 = 1.28 million cells. Memory (RAM) needed (using `ifprec=1`): ~1 GB
- `small` Grid size: 133x361x901 = 43.26 million cells. Memory (RAM) needed (using `ifprec=1`): ~6 GB
- `medium` Grid size: 267x721x1801 = 346.7 million cells. Memory (RAM) needed (using `ifprec=1`): ~41 GB
- `large` Grid size: 535x1441x3601 = 2.78 billion cells. Memory (RAM) needed (using `ifprec=1`): ~330 GB
Note that these tests will not output the 3D magnetic field results of the run, so no extra disk space is needed. Instead, the validation is done using the magnetic energy diagnostics in the `pot3d.out` file.