

OLMo Logo

OLMo-core

Building blocks for OLMo modeling and training

Docs | Examples | PyPI | GitHub License | Paper URL | Playground | Discord

Installation

First install PyTorch according to the instructions specific to your operating system and hardware.

For development, we recommend installing from source:

git clone https://github.com/allenai/OLMo-core.git
cd OLMo-core
pip install -e .[all]

Or you can install from PyPI with:

pip install ai2-olmo-core
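
Either way, you can sanity-check the installation from Python. This is a minimal sketch assuming only that PyTorch and the ai2-olmo-core package installed cleanly:

import torch
from importlib.metadata import version

# Confirm the package resolved, and report versions.
print("ai2-olmo-core:", version("ai2-olmo-core"))
print("torch:", torch.__version__)

# Multi-GPU training with torchrun requires working CUDA devices.
print("CUDA available:", torch.cuda.is_available())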

There are also a number of optional dependencies that must be installed to use certain functionality.

The published Docker images contain all core and optional dependencies, and are regularly tested on our in-house H100 clusters. But there are several things to keep in mind if you intend to use these images:

  • They do not come with the OLMo-core package installed, only its dependencies, to accommodate regular code changes.
  • They may not work on your own cluster if you have different hardware or driver/CUDA versions.

If the published images do not work for your use case for any of the above reasons, you could adapt our Dockerfile to build your own images.

Official training scripts

Official training scripts for released models can be found in src/scripts/official/. These scripts are meant to be launched with torchrun. For example:

torchrun --nproc-per-node=8 ./src/scripts/official/OLMo-2-0325-32B-train.py run01

You can override most configuration options from the command-line. For example, to override the learning rate you could launch the script like this:

torchrun --nproc-per-node=8 ./src/scripts/train/OLMo-2-0325-32B-train.py run01 --train_module.optim.lr=6e-3
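
An override like --train_module.optim.lr=6e-3 is a dotted path into the nested training config. The sketch below is purely illustrative (it is not OLMo-core's actual config parser) and shows how a dotted override maps onto nested fields:

# Hypothetical illustration of dotted-path overrides; OLMo-core's real
# config system may differ.
def apply_override(config: dict, dotted_key: str, value) -> None:
    """Set config['train_module']['optim']['lr'] given 'train_module.optim.lr'."""
    *parents, leaf = dotted_key.split(".")
    node = config
    for part in parents:
        node = node[part]
    node[leaf] = value

config = {"train_module": {"optim": {"lr": 3e-3}}}
apply_override(config, "train_module.optim.lr", 6e-3)
print(config["train_module"]["optim"]["lr"])  # 0.006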

To continue annealing from a checkpoint, we use a separate script which can be launched like this:

torchrun --nproc-per-node=8 ./src/scripts/train/OLMo-2-0325-32B-anneal.py anneal_run01 https://olmo-checkpoints.org/ai2-llm/peteish32/step721901/
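
Annealing here means continuing training from the given checkpoint while decaying the learning rate, typically toward zero over the annealing run; the exact schedule is defined by the anneal script itself. As a rough illustration only, a linear decay in plain PyTorch looks like this:

import torch

# Toy model and optimizer standing in for a model restored from a checkpoint.
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=6e-4)

total_steps = 1000  # length of the annealing run
scheduler = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1.0, end_factor=0.0, total_iters=total_steps
)

for step in range(total_steps):
    # ... forward pass and loss.backward() would go here ...
    optimizer.step()
    scheduler.step()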

OLMo-2 Model Training

OLMo-2 32B pretraining follows a two-stage training procedure. In the first stage, we train on large amounts of mostly web-based data: OLMo-mix-1124. In the second stage, we train on a smaller amount of high-quality, targeted data: Dolmino-mix-0324 (releasing soon).

| Stage | Model Size | Training | Checkpoint | Monitoring |
|-------|------------|----------|------------|------------|
| stage 1 | 32B | 6T tokens | stage1-step721901-tokens6056B | comet.ml/OLMo2-32B |
| stage 2 | 32B | random seed 1110, 100B tokens | stage2-ingredient1-step11921-tokens101B | comet.ml/OLMo2-32B |
| | | random seed 2662, 100B tokens | stage2-ingredient2-step11921-tokens101B | comet.ml/OLMo2-32B |
| | | random seed 2662, 300B tokens | stage2-ingredient3-step35763-tokens301B | comet.ml/OLMo2-32B |
| Final Souped Model | | | main | No config, weights averaged in Python |
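
The final souped checkpoint has no training config because it is produced by averaging the weights of the stage 2 ingredient checkpoints in Python. A minimal sketch of that averaging, assuming each ingredient checkpoint loads as an ordinary PyTorch state dict (the file paths here are placeholders):

import torch

# Placeholder paths standing in for the stage 2 ingredient checkpoints.
paths = ["ingredient1.pt", "ingredient2.pt", "ingredient3.pt"]
state_dicts = [torch.load(p, map_location="cpu") for p in paths]

# Element-wise average of each parameter tensor across the checkpoints.
souped = {
    key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    for key in state_dicts[0]
}
torch.save(souped, "souped.pt")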

The table below lists the checkpoints for Stage 1 and Stage 2 of OLMo-2, along with their corresponding Hugging Face format.

| Variant | OLMo Format (Stage 1) | OLMo Format (Stage 2) | Hugging Face Format |
|---------|------------------------|------------------------|---------------------|
| OLMo-2 32B | OLMo-2 32B | OLMo-2 32B | Hugging Face for the 32B variant |

Note: OLMo-2 7B and 13B models were trained using the old OLMo trainer. All related checkpoints, configs, and scripts for these models can be found there. While you can train 7B and 13B models with this trainer, please note that the configs and scripts in the old training codebase are not compatible with this repo.

Inference

You can use our Hugging Face integration to run inference on the OLMo transformers checkpoints:

from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0325-32B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-0325-32B")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# Optional: move the inputs and model to the GPU.
# inputs = {k: v.to('cuda') for k, v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])

Alternatively, with the Hugging Face pipeline abstraction:

from transformers import pipeline

olmo_pipe = pipeline("text-generation", model="allenai/OLMo-2-0325-32B")
print(olmo_pipe("Language modeling is"))

Quantization

import torch
from transformers import AutoModelForCausalLM

# Requires the bitsandbytes package.
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0325-32B",
    torch_dtype=torch.float16,
    load_in_8bit=True,
)

Evaluation

Additional tools for evaluating OLMo models are available at the OLMo Eval and olmes repositories.

Development

The Python library source code is located in src/olmo_core. The corresponding tests are located in src/test. The library docs are located in docs. You can build the docs locally with make docs.

Code checks:

  • We use pytest to run tests. You can run all tests with pytest -v src/test. You can also point pytest at a specific test file to run it individually.
  • We use isort and black for code formatting. Ideally you should integrate these into your editor, but you can also run them manually or configure them with a pre-commit hook. To validate that all files are formatted correctly, run make style-check.
  • We use ruff as our primary linter. You can run it with make lint-check.
  • We use mypy as our type checker. You can run it with make type-check.

Citing

@misc{olmo20242olmo2furious,
  title={{2 OLMo 2 Furious}},
  author={{Team OLMo} and Pete Walsh and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Shane Arora and Akshita Bhagia and Yuling Gu and Shengyi Huang and Matt Jordan and Nathan Lambert and Dustin Schwenk and Oyvind Tafjord and Taira Anderson and David Atkinson and Faeze Brahman and Christopher Clark and Pradeep Dasigi and Nouha Dziri and Michal Guerquin and Hamish Ivison and Pang Wei Koh and Jiacheng Liu and Saumya Malik and William Merrill and Lester James V. Miranda and Jacob Morrison and Tyler Murray and Crystal Nam and Valentina Pyatkin and Aman Rangapur and Michael Schmitz and Sam Skjonsberg and David Wadden and Christopher Wilhelm and Michael Wilson and Luke Zettlemoyer and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
  year={2024},
  eprint={2501.00656},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2501.00656},
}
