arXiv:2405.14908 (cs)
[Submitted on 23 May 2024 (v1), last revised 27 Jan 2025 (this version, v4)]
Title: BiMix: A Bivariate Data Mixing Law for Language Model Pretraining
Authors: Ce Ge and 4 other authors
Abstract: Large language models have demonstrated remarkable capabilities across various tasks, largely attributable to training on diversely sourced data. However, the impact of pretraining data composition on model performance remains poorly understood. This paper introduces $\textbf{BiMix}$, a novel bivariate data mixing law that models the joint scaling behavior of domain proportions and data volume in LLM pretraining. $\textbf{BiMix}$ provides a systematic framework for understanding and optimizing data mixtures across diverse domains. Through extensive experiments on two large-scale datasets, we demonstrate $\textbf{BiMix}$'s high accuracy in loss extrapolation (mean relative error $< 0.2\%$) and its generalization to unseen mixtures ($R^2 > 0.97$). Optimization of domain proportions yields superior model performance compared to existing methods. Furthermore, we establish entropy-based measures as efficient proxies for data mixing, offering a computationally lightweight strategy. Our work contributes both theoretical insights into data mixing dynamics and practical tools for enhancing LLM training efficiency, paving the way for more effective scaling strategies in language model development.
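The abstract does not spell out BiMix's functional form, so the snippet below is an illustrative sketch only: it fits a generic bivariate power law, $L(r, s) = C / (r^{\alpha} s^{\beta}) + E$, over a domain's mixing proportion $r$ and data volume $s$, then extrapolates the loss to an unseen mixture and scale, and finally computes the Shannon entropy of a candidate mixture in the spirit of the abstract's entropy-based proxies. The functional form, the synthetic data, and all coefficient names are assumptions for illustration, not the paper's actual law.

```python
# Illustrative sketch only: a generic bivariate power law over domain
# proportion r and data volume s. This is NOT necessarily BiMix's exact
# functional form (see the paper); it just shows how such a law can be
# fit and then used for loss extrapolation.
import numpy as np
from scipy.optimize import curve_fit

def bivariate_law(X, C, alpha, beta, E):
    """Assumed loss surface: L(r, s) = C / (r**alpha * s**beta) + E."""
    r, s = X
    return C / (r**alpha * s**beta) + E

# Synthetic observations: (proportion, tokens in billions) -> validation loss.
rng = np.random.default_rng(0)
r = rng.uniform(0.05, 0.5, 40)        # domain mixing proportions
s = rng.uniform(0.1, 10.0, 40)        # data volume (billions of tokens)
loss = bivariate_law((r, s), 2.0, 0.3, 0.25, 1.5) + rng.normal(0, 0.01, 40)

# Fit the four coefficients, then extrapolate to an unseen mixture/scale.
params, _ = curve_fit(bivariate_law, (r, s), loss, p0=[1.0, 0.5, 0.5, 1.0])
print("fitted (C, alpha, beta, E):", np.round(params, 3))
print("predicted loss at r=0.3, s=20B:", bivariate_law((0.3, 20.0), *params))

# Shannon entropy of a hypothetical mixture, as a cheap proxy in the spirit
# of the abstract's entropy-based measures (the paper's measure may differ).
p = np.array([0.4, 0.3, 0.2, 0.1])    # hypothetical domain proportions
print("mixture entropy:", float(-(p * np.log(p)).sum()))
```

Under a law of this kind, optimizing domain proportions reduces to minimizing the fitted per-domain losses subject to the proportions summing to one, which is what makes a closed-form mixing law practically useful.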
Comments: Clarify details
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Cite as: arXiv:2405.14908 [cs.LG]
(or arXiv:2405.14908v4 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2405.14908 (arXiv-issued DOI via DataCite)
Submission history
From: Ce Ge PhD
[v1] Thu, 23 May 2024 09:44:02 UTC (2,255 KB)
[v2] Thu, 11 Jul 2024 08:44:45 UTC (2,255 KB)
[v3] Tue, 15 Oct 2024 03:40:30 UTC (1,950 KB)
[v4] Mon, 27 Jan 2025 11:25:33 UTC (1,953 KB)