PyTorch building blocks for the OLMo ecosystem
First install PyTorch according to the instructions specific to your operating system and hardware.
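If you want to confirm that PyTorch is installed and can see your accelerator before going further, a minimal check (nothing OLMo-specific) looks like this:

```python
# Minimal sanity check that PyTorch is installed and detects your hardware.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```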
For development, we recommend installing from source:
```bash
git clone https://github.com/allenai/OLMo-core.git
cd OLMo-core
pip install -e .[all]
```
Or you can install from PyPI with:
```bash
pip install ai2-olmo-core
```
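Either way, a quick import check verifies the installation. This is a small sketch that assumes the distribution name `ai2-olmo-core` from the command above and the import name `olmo_core` used elsewhere in this README:

```python
# Confirm that OLMo-core is importable and report the installed version.
import importlib.metadata

import olmo_core  # noqa: F401

print(importlib.metadata.version("ai2-olmo-core"))
```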
There are also a number of optional dependencies that must be installed to use certain functionality, including:
- flash-attn and ring-flash-attn for intra-document masking and context parallelism.
- Liger-Kernel for a low-memory "fused-linear" loss implementation.
- torchao for float8 training.
- grouped_gemm for dropless mixture-of-experts (MoE) models. You may need to compile from source until PR #21 is released (post v0.1.6).
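If you are unsure which of these optional dependencies are present in your environment, you can probe for their import names. This is only a rough sketch; the module names below are assumptions based on the package names above:

```python
# Report which optional dependencies are importable in the current environment.
import importlib.util

optional_modules = {
    "flash-attn": "flash_attn",
    "ring-flash-attn": "ring_flash_attn",
    "Liger-Kernel": "liger_kernel",
    "torchao": "torchao",
    "grouped_gemm": "grouped_gemm",
}

for package, module in optional_modules.items():
    status = "installed" if importlib.util.find_spec(module) else "missing"
    print(f"{package}: {status}")
```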
The published Docker images contain all core and optional dependencies, and are regularly tested on our in-house H100 clusters. But there are several things to keep in mind if you intend to use these images:
- They do not come with the OLMo-core package installed, only its dependencies, to accommodate regular code changes.
- They may not work on your own cluster if you have different hardware or driver/CUDA versions.
If the published images do not work for your use case for any of the above reasons, you could adapt our Dockerfile to build your own images.
Official training scripts for released models can be found in `src/scripts/official/`. These scripts are meant to be launched with `torchrun`. For example:
```bash
torchrun --nproc-per-node=8 ./src/scripts/official/OLMo-2-0325-32B-train.py run01
```
You can override most configuration options from the command-line. For example, to override the learning rate you could launch the script like this:
```bash
torchrun --nproc-per-node=8 ./src/scripts/train/OLMo-2-0325-32B-train.py run01 --train_module.optim.lr=6e-3
```
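The dotted flag `--train_module.optim.lr=6e-3` names a path through the script's nested training config. The sketch below only illustrates that idea with plain dataclasses; the class and field names are stand-ins, not OLMo-core's actual config API:

```python
from dataclasses import dataclass, field

# Toy stand-ins for a nested experiment config (hypothetical names).
@dataclass
class OptimConfig:
    lr: float = 6e-4

@dataclass
class TrainModuleConfig:
    optim: OptimConfig = field(default_factory=OptimConfig)

@dataclass
class ExperimentConfig:
    train_module: TrainModuleConfig = field(default_factory=TrainModuleConfig)

def apply_override(config, dotted_key: str, raw_value: str) -> None:
    """Apply an override like 'train_module.optim.lr=6e-3' to a nested config."""
    *parents, leaf = dotted_key.split(".")
    target = config
    for name in parents:
        target = getattr(target, name)
    # Cast the raw string to the type of the existing value.
    setattr(target, leaf, type(getattr(target, leaf))(raw_value))

cfg = ExperimentConfig()
apply_override(cfg, "train_module.optim.lr", "6e-3")
print(cfg.train_module.optim.lr)  # 0.006
```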
To continue annealing from a checkpoint, we use a separate script which can be launched like this:
```bash
torchrun --nproc-per-node=8 ./src/scripts/train/OLMo-2-0325-32B-anneal.py anneal_run01 https://olmo-checkpoints.org/ai2-llm/peteish32/step721901/
```
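"Annealing" here means continuing training from the given checkpoint while decaying the learning rate toward zero over the annealing run. The snippet below is only an illustration of such a schedule, assuming a simple linear decay; the actual schedule comes from the anneal script's config:

```python
# Illustrative linear learning-rate anneal from a starting LR down to zero.
def annealed_lr(step: int, total_steps: int, start_lr: float) -> float:
    return start_lr * max(0.0, 1.0 - step / total_steps)

# Example: the LR halfway through a 10,000-step anneal starting from 6e-4.
print(annealed_lr(5_000, 10_000, 6e-4))  # 3e-4
```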
OLMo-2 32B pretraining follows a two-stage training procedure. In the first stage, we train on large amounts of mostly web-based data: OLMo-mix-1124. In the second stage, we train on a smaller amount of high-quality, targeted data: Dolmino-mix-0324 (releasing soon).
| Stage | Model Size | Training | Checkpoint | Monitoring |
|---|---|---|---|---|
| stage 1 | 32B | 6T tokens | stage1-step721901-tokens6056B | comet.ml/OLMo2-32B |
| stage 2 | 32B | random seed 1110, 100B tokens | stage2-ingredient1-step11921-tokens101B | comet.ml/OLMo2-32B |
| | | random seed 2662, 100B tokens | stage2-ingredient2-step11921-tokens101B | comet.ml/OLMo2-32B |
| | | random seed 2662, 300B tokens | stage2-ingredient3-step35763-tokens301B | comet.ml/OLMo2-32B |
| | | Final Souped Model | main | No config, weights averaged in Python |
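The final souped model has no training config of its own; per the table above, its weights are an average of the stage 2 ingredient checkpoints. A minimal sketch of that averaging, assuming the checkpoints load as ordinary PyTorch state dicts with matching keys:

```python
import torch

def soup(checkpoint_paths):
    """Average several state dicts with matching keys into one 'souped' state dict."""
    avg = None
    for path in checkpoint_paths:
        state_dict = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.float().clone() for k, v in state_dict.items()}
        else:
            for k, v in state_dict.items():
                avg[k] += v.float()
    return {k: v / len(checkpoint_paths) for k, v in avg.items()}

# e.g. souped = soup(["ingredient1.pt", "ingredient2.pt", "ingredient3.pt"])
```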
The table below lists the checkpoints for Stage 1 and Stage 2 of OLMo-2, along with their corresponding Hugging Face format.
| Variant | OLMo Format (Stage 1) | OLMo Format (Stage 2) | Hugging Face Format |
|---|---|---|---|
| OLMo-2 32B | OLMo-2 32B | OLMo-2 32B | Hugging Face for the 32B variant |
Note: OLMo-2 7B and 13B models were trained using the old OLMo trainer. All related checkpoints, configs, and scripts for these models can be found there. While you can train 7B and 13B models with this trainer, please note that the configs and scripts in the old training codebase are not compatible with this repo.
You can use our Hugging Face integration to run inference on the OLMo transformers checkpoints:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0325-32B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-0325-32B")

message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors="pt", return_token_type_ids=False)
# Optional: run on GPU.
# inputs = {k: v.to("cuda") for k, v in inputs.items()}
# olmo = olmo.to("cuda")
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```
Alternatively, with the Hugging Face pipeline abstraction:
```python
from transformers import pipeline

olmo_pipe = pipeline("text-generation", model="allenai/OLMo-2-0325-32B")
print(olmo_pipe("Language modeling is"))
```
The model can also be loaded with quantization:

```python
import torch

# 8-bit quantized loading (requires bitsandbytes).
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0325-32B", torch_dtype=torch.float16, load_in_8bit=True
)
```
Additional tools for evaluating OLMo models are available at the OLMo Eval and olmes repositories.
The Python library source code is located in `src/olmo_core`. The corresponding tests are located in `src/test`. The library docs are located in `docs`. You can build the docs locally with `make docs`.
Code checks:

- We use `pytest` to run tests. You can run all tests with `pytest -v src/test`. You can also point `pytest` at a specific test file to run it individually.
- We use `isort` and `black` for code formatting. Ideally you should integrate these into your editor, but you can also run them manually or configure them with a pre-commit hook. To validate that all files are formatted correctly, run `make style-check`.
- We use `ruff` as our primary linter. You can run it with `make lint-check`.
- We use `mypy` as our type checker. You can run it with `make type-check`.
```bibtex
@misc{olmo20242olmo2furious,
  title={{2 OLMo 2 Furious}},
  author={{Team OLMo} and Pete Walsh and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Shane Arora and Akshita Bhagia and Yuling Gu and Shengyi Huang and Matt Jordan and Nathan Lambert and Dustin Schwenk and Oyvind Tafjord and Taira Anderson and David Atkinson and Faeze Brahman and Christopher Clark and Pradeep Dasigi and Nouha Dziri and Michal Guerquin and Hamish Ivison and Pang Wei Koh and Jiacheng Liu and Saumya Malik and William Merrill and Lester James V. Miranda and Jacob Morrison and Tyler Murray and Crystal Nam and Valentina Pyatkin and Aman Rangapur and Michael Schmitz and Sam Skjonsberg and David Wadden and Christopher Wilhelm and Michael Wilson and Luke Zettlemoyer and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
  year={2024},
  eprint={2501.00656},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2501.00656},
}
```