
OLMo Logo

OLMo: Open Language Model

GitHub License · GitHub release · Paper URL · Playground · Discord

OLMo is a repository for training and using AI2's state-of-the-art open language models. It is designed by scientists, for scientists.

Installation

First, install PyTorch following the instructions specific to your operating system.

For training and fine-tuning, we recommend installing from source:

git clone https://github.com/allenai/OLMo.git
cd OLMo
pip install -e .[all]

You can also install from PyPI with:

pip install ai2-olmo

Pretraining

OLMo pretraining follows a two-stage training procedure. In the first stage, we train on large amounts of mostly web-based data: OLMo-mix-1124. In the second stage, we train on a smaller amount of high-quality, targeted data: Dolmino-mix-1124.

You can find all the checkpoints, saved at minimum every 1000 training steps, in OLMo core and Hugging Face formats:

| Variant | OLMo Format (Stage 1) | OLMo Format (Stage 2) | Hugging Face Format |
|---|---|---|---|
| OLMo-2 1B | OLMo-2 1B | OLMo-2 1B | Hugging Face for the 1B variant |
| OLMo-2 7B | OLMo-2 7B | OLMo-2 7B | Hugging Face for the 7B variant |
| OLMo-2 13B | OLMo-2 13B | OLMo-2 13B | Hugging Face for the 13B variant |
| OLMo-2 32B | OLMo-2 32B | OLMo-2 32B | Hugging Face for the 32B variant |

Note: The 32B variant was trained on our new trainer. To train or fine-tune OLMo-2 32B, visit OLMo-core.
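
If you want to start from one of these intermediate checkpoints rather than the final weights, the Hugging Face repos expose them as named revisions. The snippet below is a minimal sketch of loading one with transformers; it assumes the revision name matches the Stage 1 checkpoint label from the tables further down, so check the repo's branch list on the Hub if a given label is not found.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: intermediate checkpoints are published as named revisions of the
# Hugging Face repos, using the checkpoint labels from the tables below.
revision = "stage1-step1907359-tokens4001B"  # final Stage 1 checkpoint of the 1B variant
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B", revision=revision)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-0425-1B", revision=revision)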

Steps to reproduce

To reproduce any of the training processes described below, run this:

torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config}

For the training config, use any of the configs listed below.

If you want to override any of the settings in the training config without having to write a new config every time, you can do this:

torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
  --setting1=value \
  --setting2=value \
  --setting3.subsetting1=value

The training configs below refer to training data that gets streamed in live over HTTP. To reproduce at large scale, we recommend downloading the files locally and changing the paths to point to your local file system.
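
One way to do that is to download every HTTP data path referenced by a config and rewrite the config to use the local copies. The sketch below is only illustrative: it assumes the config keeps its data sources as a list of URLs under data.paths (the key layout may differ between configs), and the file and directory names are hypothetical.

import os
import urllib.request
import yaml  # pip install pyyaml

config_path = "path/to/OLMo2-7B-stage1.yaml"  # one of the training configs linked below
local_dir = "local_data"
os.makedirs(local_dir, exist_ok=True)

with open(config_path) as f:
    config = yaml.safe_load(f)

# Assumption: the training data URLs live under config["data"]["paths"].
local_paths = []
for url in config["data"]["paths"]:
    dest = os.path.join(local_dir, os.path.basename(url))
    if not os.path.exists(dest):
        urllib.request.urlretrieve(url, dest)
    local_paths.append(os.path.abspath(dest))
config["data"]["paths"] = local_paths

with open("OLMo2-7B-stage1-local.yaml", "w") as f:
    yaml.safe_dump(config, f)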

To run on Apple silicon Macs:

python scripts/train.py {path_to_train_config}

Example:

python scripts/train.py configs/tiny/OLMo-20M.yaml --save_overwrite

Note: You need to upgrade PyTorch to 2.5.x for this to run.

Stage 1

Stage 1 is the biggest stage, where we train on 4T or 5T tokens of largely web-based data.

|  | OLMo2 1B | OLMo2 7B | OLMo2 13B |
|---|---|---|---|
| Number of tokens | 4 Trillion | 4 Trillion | 5 Trillion |
| Checkpoint | stage1-step1907359-tokens4001B | stage1-step928646-tokens3896B | stage1-step596057-tokens5001B |
| Training config | OLMo2-1B-stage1.yaml | OLMo2-7B-stage1.yaml | OLMo2-13B-stage1.yaml |
| WandB | wandb.ai/OLMo2-1B | wandb.ai/OLMo2-7B | wandb.ai/OLMo2-13B |

You can find the .csv.gz files containing the training data here.

Stage 2 for the 1B

For the 1B model, we trained three times with different data orders on 50B high-quality tokens, and used the last checkpoint of the seed-42 run as the final checkpoint.

|  | Checkpoint | Training config | WandB |
|---|---|---|---|
| random seed 42069 | stage2-ingredient1-step23852-tokens51B | OLMo2-1B-stage2-seed42069.yaml | wandb.ai/OLMo2-1B |
| random seed 666 | stage2-ingredient2-step23852-tokens51B | OLMo2-1B-stage2-seed666.yaml | wandb.ai/OLMo2-1B |
| random seed 42 (main) | stage2-ingredient3-step23852-tokens51B | OLMo2-1B-stage2-seed42.yaml | wandb.ai/OLMo2-1B |

Stage 2 for the 7B

For the 7B model, we train three times with different data orders on 50B high-quality tokens, and then average ("soup") the models.

|  | Checkpoint | Training config | WandB |
|---|---|---|---|
| random seed 42 | stage2-ingredient1-step11931-tokens50B | OLMo2-7B-stage2-seed42.yaml | wandb.ai/OLMo2-7B |
| random seed 42069 | stage2-ingredient2-step11931-tokens50B | OLMo2-7B-stage2-seed42069.yaml | wandb.ai/OLMo2-7B |
| random seed 666 | stage2-ingredient3-step11931-tokens50B | OLMo2-7B-stage2-seed666.yaml | wandb.ai/OLMo2-7B |
| final souped model | main | no config, we just averaged the weights in Python (see the sketch below) |  |

The training configs linked here are set up to download the latest checkpoint after stage 1, and start training from there.
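
The repository does not ship a script for the souping step; the snippet below is only a sketch of the idea, assuming the ingredient checkpoints have already been converted to Hugging Face format and sit in hypothetical local directories.

import torch
from transformers import AutoModelForCausalLM

# Hypothetical local paths to the stage 2 ingredient checkpoints (HF format).
ingredient_dirs = ["ckpts/seed42", "ckpts/seed42069", "ckpts/seed666"]

models = [AutoModelForCausalLM.from_pretrained(d, torch_dtype=torch.float32) for d in ingredient_dirs]

# Average ("soup") each parameter across the ingredient models.
avg_state = models[0].state_dict()
for key in avg_state:
    avg_state[key] = torch.stack([m.state_dict()[key].float() for m in models]).mean(dim=0)

souped = AutoModelForCausalLM.from_pretrained(ingredient_dirs[0], torch_dtype=torch.float32)
souped.load_state_dict(avg_state)
souped.save_pretrained("ckpts/souped")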

Stage 2 for the 13B

For the 13B model, we train three times with different data orders on 100B high-quality tokens, and one more time on 300B high-quality tokens. Then we average ("soup") the models.

|  | Checkpoint | Training config | WandB |
|---|---|---|---|
| random seed 1110, 100B | stage2-ingredient1-step11931-tokens100B | OLMo2-13B-stage2-seed1110-100B.yaml | wandb.ai/OLMo2-13B |
| random seed 2662, 100B | stage2-ingredient2-step11931-tokens100B | OLMo2-13B-stage2-seed2662-100B.yaml | wandb.ai/OLMo2-13B |
| random seed 6209, 100B | stage2-ingredient3-step11931-tokens100B | OLMo2-13B-stage2-seed6209-100B.yaml | wandb.ai/OLMo2-13B |
| random seed 2662, 300B | stage2-ingredient4-step11931-tokens300B | OLMo2-13B-stage2-seed2662-300B.yaml | wandb.ai/OLMo2-13B |
| final souped model | main | no config, we just averaged the weights in Python |  |

The training configs linked here are set up to download the latest checkpoints after stage 1, and start training from there.

Note: You can find all the information about the 32B in the OLMo-core repository.

Instruction tuned variants

For instruction-tuned variants of these models, go to

Inference

You can use our Hugging Face integration to run inference on the OLMo Transformers checkpoints:

from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-0425-1B")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional: move the model and inputs to CUDA
# inputs = {k: v.to('cuda') for k, v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])

Alternatively, with the Hugging Face pipeline abstraction:

from transformers import pipeline

olmo_pipe = pipeline("text-generation", model="allenai/OLMo-2-0425-1B")
print(olmo_pipe("Language modeling is"))

Quantization

import torch
from transformers import AutoModelForCausalLM

olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0425-1B",
    torch_dtype=torch.float16,
    load_in_8bit=True,  # requires bitsandbytes
)

The quantized model is sensitive to input types and CUDA handling. To avoid potential issues, we recommend explicitly converting input IDs to CUDA using: inputs.input_ids.to('cuda')
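
Putting that together, a minimal end-to-end sketch of quantized inference might look like the following. It assumes a CUDA-capable GPU and the bitsandbytes package; the prompt and generation settings are just illustrative.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 8-bit quantized model (requires bitsandbytes and a CUDA device).
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0425-1B", torch_dtype=torch.float16, load_in_8bit=True
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-0425-1B")

inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False)
# Explicitly move the input IDs (and attention mask) to CUDA, as recommended above.
input_ids = inputs.input_ids.to("cuda")
attention_mask = inputs.attention_mask.to("cuda")

response = olmo.generate(input_ids=input_ids, attention_mask=attention_mask, max_new_tokens=100)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])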

Evaluation

Additional tools for evaluating OLMo models are available at the OLMo Eval and olmes repositories.

Modal.com Hosting

An example script is provided for hosting an OLMo 2 model on Modal.com using the OpenAI API in ./scripts/olmo2_modal_openai.py. To run that:

  1. Follow the instructions under Getting Started in the Modal.com Guide to install the Modal library and command line tools.
  2. Follow the instructions under Secrets in the Modal.com Guide to create a Modal secret named "example-secret-token" that defines a value for the variable MODAL_TOKEN for your server.
  3. Then run
modal deploy ./scripts/olmo2_modal_openai.py

You can check your endpoint using curl similar to the following:

curl -X POST \
  -H "Authorization: Bearer [the secret token from above]" \
  -H "Content-Type: application/json" \
  -d @body.json \
  https://[the web endpoint modal creates above]/v1/chat/completions

where body.json is of the form:

{    "model": "OLMo-2-1124-13B-Instruct",    "messages": [        {            "role": "user",            "content": "Who was Alan Turing?"        }      ],    "max_tokens": 100,    "temperature": 0.9,    "stream": true}

Citing

@misc{olmo20242olmo2furious,
      title={2 OLMo 2 Furious},
      author={Team OLMo and Pete Walsh and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Shane Arora and Akshita Bhagia and Yuling Gu and Shengyi Huang and Matt Jordan and Nathan Lambert and Dustin Schwenk and Oyvind Tafjord and Taira Anderson and David Atkinson and Faeze Brahman and Christopher Clark and Pradeep Dasigi and Nouha Dziri and Michal Guerquin and Hamish Ivison and Pang Wei Koh and Jiacheng Liu and Saumya Malik and William Merrill and Lester James V. Miranda and Jacob Morrison and Tyler Murray and Crystal Nam and Valentina Pyatkin and Aman Rangapur and Michael Schmitz and Sam Skjonsberg and David Wadden and Christopher Wilhelm and Michael Wilson and Luke Zettlemoyer and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
      year={2024},
      eprint={2501.00656},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.00656},
}
