arXiv:2410.15661 (cs)
[Submitted on 21 Oct 2024]
Title: Scalable Data Ablation Approximations for Language Models through Modular Training and Merging
Authors: Clara Na, Ian Magnusson, Ananya Harsh Jha, Tom Sherborne, Emma Strubell, Jesse Dodge, Pradeep Dasigi
Abstract: Training data compositions for Large Language Models (LLMs) can significantly affect their downstream performance. However, a thorough data ablation study exploring large sets of candidate data mixtures is typically prohibitively expensive, since the full effect is seen only after training the models; this can lead practitioners to settle for sub-optimal data mixtures. We propose an efficient method for approximating data ablations that trains individual models on subsets of a training corpus and reuses them across evaluations of combinations of subsets. In continued pre-training experiments, we find that, given an arbitrary evaluation set, the perplexity score of a single model trained on a candidate set of data is strongly correlated with the perplexity scores of parameter averages of models trained on distinct partitions of that data. From this finding, we posit that researchers and practitioners can conduct inexpensive simulations of data ablations by maintaining a pool of models, each trained on a partition of a large training corpus, and assessing candidate data mixtures by evaluating parameter averages of combinations of these models. This approach allows for substantial improvements in amortized training efficiency, scaling only linearly with respect to new data, by enabling reuse of previous training computation, opening new avenues for improving model performance through rigorous, incremental data assessment and mixing.
Comments: EMNLP 2024. 17 pages
Subjects: Computation and Language (cs.CL); Machine Learning (cs.LG)
Cite as: arXiv:2410.15661 [cs.CL] (or arXiv:2410.15661v1 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2410.15661
Related DOI: https://doi.org/10.18653/v1/2024.emnlp-main.1176
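The core procedure described in the abstract, maintaining a pool of partition-trained models and scoring a candidate data mixture via the parameter average of the corresponding models, can be sketched in a few lines of PyTorch. The following is a minimal illustration under stated assumptions, not the authors' released code: the toy nn.Linear "models", the partition names, and the merge_state_dicts helper are hypothetical stand-ins for partition-trained LLM checkpoints.

```python
import torch
import torch.nn as nn

def merge_state_dicts(state_dicts):
    """Uniform parameter average over models with identical architectures.

    Hypothetical helper: averages each parameter tensor elementwise
    across the given state dicts.
    """
    merged = {}
    for key in state_dicts[0]:
        merged[key] = torch.stack(
            [sd[key].float() for sd in state_dicts]
        ).mean(dim=0)
    return merged

# Toy "pool": one model per training-corpus partition. In the paper's
# setting these would be LLM checkpoints continued-pre-trained on
# distinct partitions of a large corpus.
torch.manual_seed(0)
pool = {name: nn.Linear(8, 8) for name in ["wiki", "code", "papers"]}

# Candidate data mixture = {wiki, papers}: merge the matching models
# instead of training a new model on the combined data.
candidate = ["wiki", "papers"]
avg_state = merge_state_dicts([pool[n].state_dict() for n in candidate])

proxy = nn.Linear(8, 8)
proxy.load_state_dict(avg_state)
# `proxy` now stands in for a model trained on wiki+papers; its
# perplexity on an evaluation set serves as a cheap ranking signal
# for the candidate mixture.
```

Under this scheme, assessing a new candidate mixture costs one merge plus one perplexity evaluation rather than a full training run, and incorporating new data requires training only one additional partition model, which is the source of the linear-scaling claim in the abstract.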