Computer Science > Machine Learning
arXiv:2207.07256 (cs)
[Submitted on 15 Jul 2022 (v1), last revised 20 Aug 2022 (this version, v2)]
Title: Improving Task-free Continual Learning by Distributionally Robust Memory Evolution
Abstract: Task-free continual learning (CL) aims to learn from a non-stationary data stream, without explicit task definitions, while not forgetting previous knowledge. The widely adopted memory-replay approach can gradually become less effective on long data streams: the model may memorize the stored examples and overfit the memory buffer. Moreover, existing methods overlook the high uncertainty in the memory data distribution, since there is a large gap between the memory data distribution and the distribution of all previous data examples. To address these problems, we propose, for the first time, a principled memory evolution framework that dynamically evolves the memory data distribution, making the memory buffer gradually harder to memorize via distributionally robust optimization (DRO). We then derive a family of methods that evolve the memory buffer data in the continuous probability measure space with Wasserstein gradient flow (WGF). The proposed DRO is taken with respect to the worst-case evolved memory data distribution; it therefore guarantees model performance and learns significantly more robust features than existing memory-replay-based methods. Extensive experiments on existing benchmarks demonstrate the effectiveness of the proposed methods in alleviating forgetting. As a by-product of the proposed framework, our method is more robust to adversarial examples than existing task-free CL methods. Code is available on GitHub \url{this https URL}
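To make the idea concrete, below is a minimal sketch of the kind of memory evolution the abstract describes: the buffered examples are pushed toward a worst-case (harder-to-memorize) distribution by noisy gradient ascent on the replay loss, which can be read as a Langevin-style discretization of a Wasserstein gradient flow. This is not the authors' implementation; the number of steps, step size, and temperature `beta` are hypothetical hyperparameters chosen for illustration.

```python
import torch
import torch.nn.functional as F

def evolve_memory(model, x_mem, y_mem, steps=3, step_size=0.1, beta=10.0):
    """Evolve memory-buffer examples toward a worst-case distribution.

    Performs noisy gradient *ascent* on the replay loss, i.e. a
    Langevin-style discretization of a Wasserstein gradient flow,
    so the buffer becomes gradually harder for the model to memorize.
    All hyperparameters here are illustrative, not from the paper.
    """
    x = x_mem.clone().detach()
    for _ in range(steps):
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), y_mem)
        (grad,) = torch.autograd.grad(loss, x)
        with torch.no_grad():
            # Ascend the loss (make examples harder) plus Gaussian noise,
            # as in an (unadjusted) Langevin update at temperature 1/beta.
            noise = torch.randn_like(x) * (2.0 * step_size / beta) ** 0.5
            x = x + step_size * grad + noise
        x = x.detach()
    return x
```

In a replay step, the evolved examples would then be mixed with the incoming batch, e.g. training on `F.cross_entropy(model(x_new), y_new) + F.cross_entropy(model(evolve_memory(model, x_mem, y_mem)), y_mem)`, so the model sees a robustified version of its own memory rather than the same stored points over and over.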
Comments: ICML 2022
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2207.07256 [cs.LG]
(or arXiv:2207.07256v2 [cs.LG] for this version)
https://doi.org/10.48550/arXiv.2207.07256 (arXiv-issued DOI via DataCite)
Submission history
From: Zhenyi Wang [view email]
[v1] Fri, 15 Jul 2022 02:16:09 UTC (519 KB)
[v2] Sat, 20 Aug 2022 12:37:00 UTC (520 KB)