NeuralOperator: Learning in Infinite Dimensions

NeuralOperator is a comprehensive PyTorch library for learning neural operators, containing the official implementation of Fourier Neural Operators and other neural operator architectures.

NeuralOperator is part of the PyTorch Ecosystem; check out the PyTorch announcement!

Unlike regular neural networks, neural operators learn mappings between function spaces, and this library provides all of the tools to do so on your own data. Neural operators are resolution-invariant, so a trained operator can be applied to data of any resolution.
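As a quick illustration of this resolution invariance, here is a minimal sketch (runnable once the library is installed as described below), with random tensors standing in for real discretized input functions:

```python
import torch
from neuralop.models import FNO

# A small FNO mapping 2 input channels to 1 output channel; n_modes sets how
# many Fourier modes are kept per spatial dimension.
operator = FNO(n_modes=(16, 16), hidden_channels=32,
               in_channels=2, out_channels=1)

# The same operator can be queried on grids of different resolutions:
coarse = torch.randn(4, 2, 64, 64)   # a batch of functions on a 64x64 grid
fine = torch.randn(4, 2, 128, 128)   # the same kind of data on a 128x128 grid
print(operator(coarse).shape)  # torch.Size([4, 1, 64, 64])
print(operator(fine).shape)    # torch.Size([4, 1, 128, 128])
```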

Check out the documentation and our practical guide for more information!

Installation

Just clone the repository and install locally (in editable mode, so changes in the code are immediately reflected without having to reinstall):

```bash
git clone https://github.com/NeuralOperator/neuraloperator
cd neuraloperator
pip install -e .
pip install -r requirements.txt
```

You can also pip install the most recent stable release of the library from PyPI:

```bash
pip install neuraloperator
```

Quickstart

After you have installed the library, you can start training operators seamlessly:

```python
from neuralop.models import FNO

operator = FNO(n_modes=(64, 64),
               hidden_channels=64,
               in_channels=2,
               out_channels=1)
```
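Since the resulting operator is a standard PyTorch module, it can be trained with an ordinary PyTorch loop. A minimal sketch of one optimization step, using random tensors in place of a real dataset and a plain MSE loss as a stand-in for an operator-learning loss:

```python
import torch
from neuralop.models import FNO

operator = FNO(n_modes=(64, 64), hidden_channels=64,
               in_channels=2, out_channels=1)
optimizer = torch.optim.Adam(operator.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()  # stand-in loss for this sketch

# Random tensors stand in for a batch of (input function, target function)
# pairs sampled on a 64x64 grid.
x = torch.randn(8, 2, 64, 64)
y = torch.randn(8, 1, 64, 64)

optimizer.zero_grad()
loss = loss_fn(operator(x), y)
loss.backward()
optimizer.step()
```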

Tensorization is also available: you can improve on the previous model by simply using a Tucker Tensor FNO with fewer parameters:

```python
from neuralop.models import TFNO

operator = TFNO(n_modes=(64, 64),
                hidden_channels=64,
                in_channels=2,
                out_channels=1,
                factorization='tucker',
                implementation='factorized',
                rank=0.1)
```

This will use a Tucker factorization of the weights. The forward pass will be efficient, contracting the inputs directly with the factors of the decomposition. The Fourier layers will have 10% of the parameters of an equivalent, dense Fourier Neural Operator!
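As a rough sanity check (a sketch reusing the two constructors above; exact counts depend on the library version), the savings show up directly in the parameter counts:

```python
from neuralop.models import FNO, TFNO

def count_params(model):
    # Total number of trainable parameters.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

dense = FNO(n_modes=(64, 64), hidden_channels=64,
            in_channels=2, out_channels=1)
tucker = TFNO(n_modes=(64, 64), hidden_channels=64,
              in_channels=2, out_channels=1,
              factorization='tucker', implementation='factorized', rank=0.1)

print(f"dense FNO:   {count_params(dense):,} parameters")
print(f"Tucker TFNO: {count_params(tucker):,} parameters")
```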

To use W&B logging features, simply create a file in `neuraloperator/config` called `wandb_api_key.txt` and paste your W&B API key there.

Contributing

NeuralOperator is 100% open-source, and we welcome contributions from the community!

Our mission for NeuralOperator is to provide access to well-documented, robust implementations of neural operator methods, from foundations to the cutting edge, including new architectures, meta-algorithms, training methods, and benchmark datasets. We are also interested in integrating interactive examples that showcase operator learning in action on small sample problems.

If your work provides one of the above, we would be thrilled to integrate it into the library. Otherwise, if your work simply relies on a version of the NeuralOperator codebase, we recommend publishing your code in a separate repository.

If you spot a bug or would like to see a new feature, please report it on our issue tracker or open a Pull Request.

For detailed development setup, testing, and contribution guidelines, please refer to our Contributing Guide.

Code of Conduct

All participants are expected to uphold the Code of Conduct to ensure a friendly and welcoming environment for everyone.

Citing NeuralOperator

If you use NeuralOperator in an academic paper, please cite [1]:

```bibtex
@article{kossaifi2025librarylearningneuraloperators,
  author  = {Jean Kossaifi and Nikola Kovachki and Zongyi Li and David Pitt and
             Miguel Liu-Schiaffini and Valentin Duruisseaux and Robert Joseph George and
             Boris Bonev and Kamyar Azizzadenesheli and Julius Berner and Anima Anandkumar},
  title   = {A Library for Learning Neural Operators},
  journal = {arXiv preprint arXiv:2412.10354},
  year    = {2025},
}
```

and consider citing [2], [3], [4]:

```bibtex
@article{duruisseaux2025guide,
  author  = {Valentin Duruisseaux and Jean Kossaifi and Anima Anandkumar},
  title   = {Fourier Neural Operators Explained: A Practical Perspective},
  journal = {arXiv preprint arXiv:2512.01421},
  year    = {2025},
}

@article{kovachki2023neuraloperator,
  author    = {Nikola Kovachki and Zongyi Li and Burigede Liu and Kamyar Azizzadenesheli and
               Kaushik Bhattacharya and Andrew Stuart and Anima Anandkumar},
  title     = {Neural Operator: Learning Maps Between Function Spaces with Applications to PDEs},
  journal   = {JMLR},
  volume    = {24},
  number    = {1},
  articleno = {89},
  numpages  = {97},
  year      = {2023},
}

@article{berner2025principled,
  author  = {Julius Berner and Miguel Liu-Schiaffini and Jean Kossaifi and Valentin Duruisseaux and
             Boris Bonev and Kamyar Azizzadenesheli and Anima Anandkumar},
  title   = {Principled Approaches for Extending Neural Architectures to Function Spaces for Operator Learning},
  journal = {arXiv preprint arXiv:2506.10973},
  year    = {2025},
}
```
[1] Kossaifi, J., Kovachki, N., Li, Z., Pitt, D., Liu-Schiaffini, M., Duruisseaux, V., George, R., Bonev, B., Azizzadenesheli, K., Berner, J., and Anandkumar, A., "A Library for Learning Neural Operators", 2025. https://arxiv.org/abs/2412.10354.

[2] Duruisseaux, V., Kossaifi, J., and Anandkumar, A., "Fourier Neural Operators Explained: A Practical Perspective", 2025. https://arxiv.org/abs/2512.01421.

[3] Kovachki, N., Li, Z., Liu, B., Azizzadenesheli, K., Bhattacharya, K., Stuart, A., and Anandkumar, A., "Neural Operator: Learning Maps Between Function Spaces with Applications to PDEs", JMLR, 24(1):89, 2023. https://arxiv.org/abs/2108.08481.

[4] Berner, J., Liu-Schiaffini, M., Kossaifi, J., Duruisseaux, V., Bonev, B., Azizzadenesheli, K., and Anandkumar, A., "Principled Approaches for Extending Neural Architectures to Function Spaces for Operator Learning", 2025. https://arxiv.org/abs/2506.10973.
