Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data
PyTorch implementation for *Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data*. This is accomplished by appropriately incorporating the governing equations into the loss / likelihood functions, as demonstrated with both deterministic surrogates (convolutional encoder-decoder networks) and probabilistic surrogates (flow-based conditional generative models).
Yinhao Zhu, Nicholas Zabaras, Phaedon-Stelios Koutsourelakis, Paris Perdikaris
(Sample predictions: Codec on GRF KLE512, Codec on Channelized, cGlow on GRF KLE100.)
Install the dependencies in `requirements.txt` and clone our repository
git clone https://github.com/cics-nd/pde-surrogate.git
cd pde-surrogate
The input datasets include Gaussian random fields (GRF) with different truncations of the N leading terms of the Karhunen-Loeve expansion (GRF KLE-N), Warped GRF, and Channelized fields. The corresponding outputs are solved with FEniCS. Note that only input samples are needed to train the physics-constrained surrogates.
Download the dataset with 64x64 grid
bash ./scripts/download_datasets.sh 64
Change `64` to `32` to download the dataset with the 32x32 grid (mainly for the probabilistic surrogate). The dataset is saved at `./datasets/`.
Train a physics-constrained surrogate with the mixed residual loss, without output data
python train_codec_mixed_residual.py --data grf_kle512 --ntrain 4096 --batch-size 32
- Use `--data channelized` to train the surrogate for channelized permeability fields.
- Choose a smaller `--batch-size` when the number of training data `--ntrain` is smaller; check more hyperparameters in the `Parser` class.
- `--cuda n` selects the `n`-th GPU card.
- Check `darcy.py` for the PDE loss and boundary loss of the Darcy flow problem.
- Check `image_gradient.py` for the Sobel filter used to estimate spatial gradients (see the sketch after this list).
- The experiments are saved at `./experiments/codec/mixed_residual/`.
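For intuition, here is a minimal, hypothetical sketch of how Sobel filters can estimate spatial gradients and form an interior residual for the Darcy flow continuity equation; the function names and loss weighting are illustrative only, and the repository's actual mixed residual loss in `darcy.py` also includes constitutive and boundary terms.

```python
# Illustrative sketch only -- not the repository's actual implementation.
import torch
import torch.nn.functional as F

# Normalized Sobel kernels for d/dx and d/dy on a regular grid with spacing h.
SOBEL_X = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]]).view(1, 1, 3, 3) / 8.0
SOBEL_Y = SOBEL_X.transpose(2, 3)

def sobel_grad(field, h):
    """Estimate (d/dx, d/dy) of a (B, 1, H, W) field with Sobel filters."""
    gx = F.conv2d(field, SOBEL_X.to(field), padding=1) / h
    gy = F.conv2d(field, SOBEL_Y.to(field), padding=1) / h
    return gx, gy

def darcy_interior_residual(K, u, h):
    """Mean squared continuity residual div(K grad u) on interior pixels."""
    ux, uy = sobel_grad(u, h)
    flux_x, flux_y = -K * ux, -K * uy            # Darcy flux: tau = -K grad u
    div_x, _ = sobel_grad(flux_x, h)
    _, div_y = sobel_grad(flux_y, h)
    residual = (div_x + div_y)[..., 1:-1, 1:-1]  # drop boundary pixels
    return residual.pow(2).mean()
```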
Train a data-driven surrogate with maximum likelihood, which requires output data
python train_codec_max_likelihood.py --data grf_kle512 --ntrain 4096 --batch-size 32
- You may try different `--data`, `--ntrain`, `--batch-size`, and many other hyperparameters; see the `Parser` class in `train_codec_mixed_residual.py` and `train_codec_max_likelihood.py`.
- The experiments are saved at `./experiments/codec/max_likelihood/`.
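For comparison, the data-driven baseline is a straightforward regression: assuming an additive Gaussian noise model with fixed noise level, maximum likelihood training reduces (up to constants) to minimizing a mean-squared error on labeled input-output pairs. A minimal, hypothetical sketch (the names are illustrative, not the script's actual API):

```python
# Illustrative sketch only -- assumes a Gaussian likelihood with fixed noise,
# so maximizing the likelihood is equivalent to minimizing the MSE.
import torch.nn.functional as F

def max_likelihood_step(model, optimizer, x, y):
    """One training step on a labeled minibatch (x: input field, y: FEM output)."""
    optimizer.zero_grad()
    loss = F.mse_loss(model(x), y)   # negative log-likelihood up to constants
    loss.backward()
    optimizer.step()
    return loss.item()
```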
Train a conditional Glow with the reverse KL divergence loss, without output data
python train_cglow_reverse_kl.py --beta 150 --ntrain 4096 --kle 100 --imsize 32
Tune the network structure by setting hyperparameters, e.g.
- `--beta`: precision parameter for the reference density (see the sketch after this list)
- `--enc-blocks`: e.g. `[3, 4, 4]`, a list of the number of layers in each dense block of the encoder network
- `--flow-blocks`: e.g. `[6, 6, 6]`, a list of the number of steps of flow in each level of the Glow model
- `--coupling`: `'dense'` or `'wide'`, the type of coupling network for the affine coupling layer
- Check `glow_msc.py` for the multiscale conditional Glow model.
- Use `--data-init` to speed up training with one minibatch of labeled data. Note that this is not necessary.
- The experiments are saved in `./experiments/cglow/reverse_kl/`.
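Conceptually, the reverse KL objective trains the conditional flow q(y|x) to match a physics-based reference density p_beta(y|x) proportional to exp(-beta * residual(y, x)), without any output samples. Below is a hypothetical sketch of one loss evaluation; the sampling method name and residual interface are assumptions, not the repository's actual API.

```python
# Illustrative sketch only -- not the repository's actual implementation.
import torch

def reverse_kl_loss(cond_flow, x, pde_residual, beta):
    """Monte Carlo estimate of KL(q(y|x) || p_beta(y|x)) up to a constant."""
    # Draw output samples from the conditional flow and get their log-density
    # (assumed method name; conditional Glow gives exact log-probabilities).
    y, log_q = cond_flow.sample_with_log_prob(x)
    # Reference density: log p_beta(y|x) = -beta * residual(y, x) + const, so
    # reverse KL = E_q[log q - log p] = E_q[log q + beta * residual] + const.
    return (log_q + beta * pde_residual(y, x)).mean()
```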
Try the more difficult case of `KLE512` over the `64x64` grid
python train_cglow_reverse_kl.py --beta 150 --ntrain 8192 --kle 512 --imsize 64 --lr 0.001
Also modify `--enc-blocks` to be `[3, 3, 3, 3]` and `--flow-blocks` to be `[4, 4, 4, 4]`. You should expect much longer training time and potentially unstable training. When training is unstable, use `--resume` to resume training from the latest saved checkpoint, or `--ckpt-epoch 100` to resume from a specific checkpoint, e.g. at 100 epochs. Using `--data-init` also helps.
Convolutional Decoder Networks vs. Fully-Connected Neural Networks
python solve_conv_mixed_residual.py --data grf --kle 1024 --idx 8 --verbose
- `--data`: one of `['grf', 'channelized', 'warped_grf']`
- `--kle`: one of `[512, 128, 1024, 2048]`; no need to set it for `'channelized'` and `'warped_grf'`
- `--idx`: 0-999, index selecting which input to solve for `'grf'` and `'warped_grf'`; 0-512 for `'channelized'`
- `--verbose`: add this flag to show more detailed output in the terminal
- Results are saved at `./experiments/solver/`
For solving nonlinear PDEs, add the `--nonlinear` flag, which calls FEniCS (`fenics.py`) to solve the nonlinear Darcy flow as the reference for the ConvNet solution.
python solve_conv_mixed_residual.py --data grf --kle 1024 --idx 8 --nonlinear --alpha1 0.1 --alpha2 0.1
where `alpha1` and `alpha2` are the coefficients in the nonlinear constitutive equation. Check `main()` in `solve_conv_mixed_residual.py` for other hyperparameters.
python solve_fc_mixed_residual.py --data grf --kle 512 --idx 8 --verbose
Same hyperparameters as in the ConvNet case. The nonlinear PDE case is not investigated here.
Download the pre-trained probabilistic surrogates
bash ./scripts/download_checkpoints.sh
Then you can check useful post-processing functions, including the ones for uncertainty quantification.
python post_cglow.py
- Use `--run-dir` to specify the directory of your own runs; the default is the downloaded pretrained model.
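As one example of the kind of post-processing involved in uncertainty quantification, output statistics can be estimated by propagating an ensemble of input fields through a trained surrogate. The sketch below is hypothetical and does not mirror the actual functions in `post_cglow.py`.

```python
# Illustrative sketch only -- Monte Carlo propagation through a trained surrogate.
import torch

@torch.no_grad()
def predictive_moments(surrogate, inputs, batch_size=64):
    """Pixel-wise predictive mean and variance over an ensemble of input fields."""
    outputs = []
    for i in range(0, len(inputs), batch_size):
        outputs.append(surrogate(inputs[i:i + batch_size]))
    y = torch.cat(outputs, dim=0)
    return y.mean(dim=0), y.var(dim=0)
```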
If you use this code for your research, please cite our paper.
@article{zhu2019physics,
  title={Physics-Constrained Deep Learning for High-dimensional Surrogate Modeling and Uncertainty Quantification without Labeled Data},
  author={Yinhao Zhu and Nicholas Zabaras and Phaedon-Stelios Koutsourelakis and Paris Perdikaris},
  journal={Journal of Computational Physics},
  volume={394},
  pages={56--81},
  year={2019},
  issn={0021-9991},
  doi={https://doi.org/10.1016/j.jcp.2019.05.024}
}