SciML/FluxNeuralOperators.jl

DeepONets, (Fourier) Neural Operators, Physics-Informed Neural Operators, and more in Julia

This repository was archived by the owner on Sep 28, 2024. It is now read-only.



Warning

This package is no longer maintained. Check out the new version of NeuralOperators.jl, built on top of Lux. While certain features might be missing in the new version, all implemented functionality has been reworked for significantly better performance. Please direct any questions regarding neural operators to that repository.

(Demo animation: Ground Truth vs. Inferred.)

The demonstration shown above is the Navier-Stokes equation learned by the `MarkovNeuralOperator` with only one time step of information. The example can be found in `example/FlowOverCircle`.

Abstract

A neural operator is a novel deep learning architecture. It learns an operator, which is a mapping between infinite-dimensional function spaces. It can be used to solve partial differential equations (PDEs). Instead of solving with the finite element method, a PDE problem can be solved by training a neural network to learn the operator mapping from the infinite-dimensional space (u, t) to the infinite-dimensional space f(u, t). A neural operator learns a continuous function between two continuous function spaces. The kernel can be trained on different geometries, which are learned from a graph.

The Fourier neural operator learns a neural operator with a Dirichlet kernel to form a Fourier transform. It performs the Fourier transform across infinite-dimensional function spaces and learns better than a plain neural operator.
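To make the idea concrete, here is a minimal sketch of the spectral convolution at the heart of a Fourier layer: transform to Fourier space, act on a truncated set of low-frequency modes with learned weights, and transform back. The function name, weight layout, and mode count below are illustrative assumptions, not this package's internals.

```julia
using FFTW

# Illustrative spectral convolution (assumed helper, not the package API):
# keep only the lowest `modes` Fourier coefficients, mix them with learned
# complex weights W, and zero out the rest.
function spectral_conv(x::AbstractVector{<:Real}, W::AbstractMatrix, modes::Int)
    x̂ = rfft(x)                      # to Fourier space
    ŷ = zeros(ComplexF64, length(x̂))
    ŷ[1:modes] = W * x̂[1:modes]      # learned action on low-frequency modes
    irfft(ŷ, length(x))              # back to physical space
end

W = randn(ComplexF64, 16, 16)        # stand-in for learned weights
y = spectral_conv(randn(64), W, 16)
```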

The Markov neural operator learns a neural operator with Fourier operators. Trained on only one time step of information, it can predict the following steps with low loss by linking the operators into a Markov chain.
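The chaining itself is just an autoregressive rollout; the following sketch (with a hypothetical `rollout` helper) shows how a learned one-step operator is iterated:

```julia
# Hypothetical rollout helper: apply a learned one-step operator
# recursively, so each prediction depends only on the previous state.
function rollout(model, u0, nsteps)
    states = [u0]
    for _ in 1:nsteps
        push!(states, model(states[end]))
    end
    states
end
```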

The DeepONet (Deep Operator Network) learns a neural operator with the help of two sub-networks, described as the branch and the trunk network. The branch network is fed the initial-condition data, whereas the trunk network is fed the locations where the target (output) is evaluated for the corresponding initial conditions. It is important that the output sizes of the branch and trunk subnets are the same, so that a dot product can be performed between them.
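As a rough sketch of how the two subnets combine (shapes and names are assumptions for illustration, not the package's internal code):

```julia
using Flux

branch = Chain(Dense(32, 72), Dense(72, 72))             # sees sampled initial conditions
trunk  = Chain(Dense(1, 72, tanh), Dense(72, 72, tanh))  # sees evaluation locations

# Both subnets emit 72-dimensional vectors, so the operator's value at
# location y for input function u is their dot product.
deeponet_eval(u, y) = sum(branch(u) .* trunk(y))

u = rand(Float32, 32)   # 32 sensor readings of an input function
y = rand(Float32, 1)    # one evaluation location
deeponet_eval(u, y)
```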

Usage

Fourier Neural Operator

```julia
using Flux
using NeuralOperators

model = Chain(
    # lift (d + 1)-dimensional vector field to n-dimensional vector field
    # here, d == 1 and n == 64
    Dense(2, 64),
    # map each hidden representation to the next by the integral kernel operator
    OperatorKernel(64 => 64, (16,), FourierTransform, gelu),
    OperatorKernel(64 => 64, (16,), FourierTransform, gelu),
    OperatorKernel(64 => 64, (16,), FourierTransform, gelu),
    OperatorKernel(64 => 64, (16,), FourierTransform),
    # project back to the scalar field of interest
    Dense(64, 128, gelu),
    Dense(128, 1),
)
```

Or one can just call:

```julia
model = FourierNeuralOperator(ch = (2, 64, 64, 64, 64, 64, 128, 1),
                              modes = (16,),
                              σ = gelu)
```
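As a quick sanity check on shapes (assuming the (channel, grid point, batch) data layout used by this package's Burgers example; the sizes here are illustrative):

```julia
𝐱 = rand(Float32, 2, 1024, 5)   # 2 input channels, 1024 grid points, batch of 5
𝐲 = model(𝐱)                    # expected size: (1, 1024, 5)
```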

And then train as a Flux model.

```julia
loss(𝐱, 𝐲) = l₂loss(model(𝐱), 𝐲)
opt = Flux.Optimiser(WeightDecay(1.0f-4), Flux.Adam(1.0f-3))
Flux.@epochs 50 Flux.train!(loss, params(model), data, opt)
```

DeepONet

```julia
# tuple of Ints for branch net architecture and then for trunk net,
# followed by activations for branch and trunk respectively
model = DeepONet((32, 64, 72), (24, 64, 72), σ, tanh)
```

Or specify branch and trunk as separate `Chain`s from Flux and pass them to `DeepONet`:

```julia
branch = Chain(Dense(32, 64, σ), Dense(64, 72, σ))
trunk  = Chain(Dense(24, 64, tanh), Dense(64, 72, tanh))

model = DeepONet(branch, trunk)
```

You can again specify the loss, optimizer, and training parameters just as you would for a simple neural network with Flux.

```julia
loss(xtrain, ytrain, sensor) = Flux.Losses.mse(model(xtrain, sensor), ytrain)
evalcb() = @show(loss(xval, yval, grid))

learning_rate = 0.001
opt = Adam(learning_rate)
parameters = params(model)
Flux.@epochs 400 Flux.train!(loss, parameters, [(xtrain, ytrain, grid)], opt, cb = evalcb)
```

Examples

PDE training examples are provided in the `example` folder.

One-dimensional Fourier Neural Operator

Burgers' equation

DeepONet implementation for solving Burgers' equation

Burgers' equation

Two-dimensional Fourier Neural Operator

Double Pendulum

Markov Neural Operator

Time dependent Navier-Stokes equation

Super Resolution with MNO

Super resolution on time dependent Navier-Stokes equation
