Computer Science > Machine Learning

arXiv:2304.04234 (cs)
[Submitted on 9 Apr 2023 (v1), last revised 9 Nov 2023 (this version, v3)]

Title: Variational operator learning: A unified paradigm marrying training neural operators and solving partial differential equations

Abstract: Neural operators, a novel class of neural architectures for rapidly approximating the solution operators of partial differential equations (PDEs), have shown considerable promise for future scientific computing. However, the mainstream approach to training neural operators remains data-driven, requiring expensive ground-truth datasets from various sources (e.g., PDE samples solved with conventional solvers, real-world experiments) in addition to the cost of the training stage itself. From a computational perspective, marrying operator learning with specific domain knowledge for solving PDEs is an essential step toward reducing dataset costs and enabling label-free learning. We propose a novel paradigm that provides a unified framework for training neural operators and solving PDEs through the variational form, which we refer to as variational operator learning (VOL). Ritz and Galerkin approaches with finite element discretization are developed for VOL to achieve matrix-free approximation of the system functional and residual, and direct minimization and iterative update are then proposed as two optimization strategies. Various experiments on reasonable benchmarks involving a variable heat source, Darcy flow, and variable-stiffness elasticity are conducted to demonstrate the effectiveness of VOL. With a label-free training set and a shift set containing only 5 labels, VOL learns solution operators whose test errors decrease according to a power law with respect to the amount of unlabeled data. To the best of the authors' knowledge, this is the first study that integrates the perspectives of the weak form and of efficient iterative methods for solving sparse linear systems into the end-to-end operator learning task.
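
The core idea behind the abstract (that the variational/weak form supplies a label-free training signal) can be illustrated with a toy example. The sketch below is not the paper's method: it trains a plain linear map rather than a neural operator, on a 1D Poisson problem with linear finite elements, by minimizing the discrete Ritz energy 1/2 u^T K u - b^T u with stochastic gradient descent. The mesh size, source distribution, learning rate, and the linear model itself are illustrative assumptions; the paper works with 2D benchmarks, neural operators, and matrix-free evaluation of the functional and residual.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 15                        # interior nodes of a uniform mesh on (0, 1)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# 1D FEM stiffness matrix for -u'' = f with homogeneous Dirichlet boundary conditions.
K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

def source(c):
    """Smooth source term built from the first three sine modes."""
    return c[0] * np.sin(np.pi * x) + c[1] * np.sin(2 * np.pi * x) + c[2] * np.sin(3 * np.pi * x)

# "Operator" to be learned: a linear map from nodal source values to nodal solution, u = W @ f.
W = np.zeros((n, n))
lr = 1e-3                     # illustrative step size for this toy problem

for step in range(5000):
    f = source(rng.uniform(-1.0, 1.0, size=3))   # label-free: no solver is called here
    b = h * f                                    # lumped load vector
    u = W @ f
    residual = K @ u - b                         # gradient of the Ritz energy w.r.t. u
    W -= lr * np.outer(residual, f)              # chain rule through u = W @ f

# Sanity check against a direct FEM solve on a held-out source.
f_test = source(np.array([1.0, 0.0, 0.0]))
u_pred = W @ f_test
u_fem = np.linalg.solve(K, h * f_test)
print("relative L2 error vs. direct solve:",
      np.linalg.norm(u_pred - u_fem) / np.linalg.norm(u_fem))
```

The point of the sketch is that each update uses only the weak-form residual K u - b, so no ground-truth solution enters the training loop; the handful of labeled samples in the paper's shift set, and its iterative-update strategy, are not reproduced here.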
Comments: This version mainly improves the quality of the bitmaps in the results compared to the previous version
Subjects: Machine Learning (cs.LG); Numerical Analysis (math.NA)
Cite as: arXiv:2304.04234 [cs.LG]
 (or arXiv:2304.04234v3 [cs.LG] for this version)
 https://doi.org/10.48550/arXiv.2304.04234
arXiv-issued DOI via DataCite
Related DOI: https://doi.org/10.1016/j.jmps.2024.105714
DOI(s) linking to related resources

Submission history

From: Tengfei Xu
[v1] Sun, 9 Apr 2023 13:20:19 UTC (10,796 KB)
[v2] Thu, 12 Oct 2023 09:20:00 UTC (9,367 KB)
[v3] Thu, 9 Nov 2023 10:02:20 UTC (27,433 KB)