Computer Science > Machine Learning
arXiv:2207.12020 (cs)
[Submitted on 25 Jul 2022 (v1), last revised 26 Dec 2022 (this version, v2)]
Title: Domain-invariant Feature Exploration for Domain Generalization
Authors: Wang Lu and 4 other authors
Abstract: Deep learning has achieved great success in the past few years. However, its performance is likely to degrade when faced with non-IID situations. Domain generalization (DG) enables a model to generalize to an unseen test distribution, i.e., to learn domain-invariant representations. In this paper, we argue that domain-invariant features should originate from two sides: internal and mutual. Internal invariance means that the features can be learned within a single domain and capture the intrinsic semantics of the data, i.e., properties within a domain that are agnostic to other domains. Mutual invariance means that the features can be learned across multiple domains (cross-domain) and contain common information, i.e., features that are transferable w.r.t. other domains. We then propose DIFEX for Domain-Invariant Feature EXploration. DIFEX employs a knowledge distillation framework to capture the high-level Fourier phase as the internally-invariant features and learns cross-domain correlation alignment as the mutually-invariant features. We further design an exploration loss to increase feature diversity for better generalization. Extensive experiments on both time-series and visual benchmarks demonstrate that the proposed DIFEX achieves state-of-the-art performance.
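The abstract names three ingredients: the Fourier phase as the internally-invariant signal, correlation alignment across domains as the mutually-invariant signal, and an exploration loss that keeps the two feature parts diverse. A minimal NumPy sketch of these three computations is shown below; it is not the authors' implementation, and the function names and the simple squared-distance form of the exploration term are illustrative assumptions.

```python
import numpy as np

def fourier_phase(x):
    """Fourier phase of a 2-D array (e.g. one image channel).

    DIFEX distills the high-level Fourier phase as the internally-
    invariant target; this shows the basic phase extraction: take a
    2-D FFT and keep only the angle, discarding the amplitude.
    """
    spectrum = np.fft.fft2(x)
    return np.angle(spectrum)

def coral_distance(fs, ft):
    """Correlation alignment (CORAL) distance between two domains.

    fs, ft: (n_samples, n_features) feature matrices. Aligning
    second-order statistics means minimizing the (scaled) Frobenius
    distance between the two domain covariance matrices.
    """
    cs = np.cov(fs, rowvar=False)
    ct = np.cov(ft, rowvar=False)
    d = fs.shape[1]
    return np.sum((cs - ct) ** 2) / (4 * d * d)

def exploration_loss(f_internal, f_mutual):
    """Illustrative diversity term (an assumption, not the paper's
    exact formula): reward separation between the internally- and
    mutually-invariant feature halves, so the two parts do not
    collapse onto the same representation.
    """
    return -np.mean((f_internal - f_mutual) ** 2)
```

A training objective in this spirit would combine a task loss with a phase-distillation term, the CORAL term summed over domain pairs, and the exploration term, weighted by trade-off hyperparameters.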
Comments: Accepted by Transactions on Machine Learning Research (TMLR) 2022; 20 pages; code: this https URL
Subjects: Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2207.12020 [cs.LG] (or arXiv:2207.12020v2 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2207.12020 (arXiv-issued DOI via DataCite)
Submission history
From: Jindong Wang [view email][v1] Mon, 25 Jul 2022 09:55:55 UTC (3,277 KB)
[v2] Mon, 26 Dec 2022 14:07:16 UTC (3,277 KB)