google/gemma_pytorch

The official PyTorch implementation of Google's Gemma models.
Gemma is a family of lightweight, state-of-the-art open models built from the research and technology used to create the Google Gemini models. They include both text-only and multimodal decoder-only large language models, with open weights, pre-trained variants, and instruction-tuned variants. For more details, see the Gemma documentation (https://ai.google.dev/gemma/docs).
This is the official PyTorch implementation of Gemma models. We provide model and inference implementations using both PyTorch and PyTorch/XLA, and support running inference on CPU, GPU and TPU.
- [March 12th, 2025 🔥] Support Gemma v3. You can find the checkpoints on Kaggle and Hugging Face.
- [June 26th, 2024] Support Gemma v2. You can find the checkpoints on Kaggle and Hugging Face.
- [April 9th, 2024] Support CodeGemma. You can find the checkpoints on Kaggle and Hugging Face.
- [April 5th, 2024] Support Gemma v1.1. You can find the v1.1 checkpoints on Kaggle and Hugging Face.
You can find the model checkpoints on Kaggle.
Alternatively, you can find the model checkpoints on the Hugging Face Hub here. To download the models, go to the model repository of the model of interest, click the Files and versions tab, and download the model and tokenizer files. For programmatic downloading, if you have huggingface_hub installed, you can also run the command below (a Python-API sketch follows the model size list):
```
huggingface-cli download google/gemma-3-4b-it-pytorch
```

The following model sizes are available:
- Gemma 3:
- Text only: 1b
- Multimodal: 4b, 12b, 27b_v3
- Gemma 2:
- Text only: 2b-v2, 9b, 27b
- Gemma:
- Text only: 2b, 7b
Note that you can choose between the 1B, 4B, 12B, and 27B variants.
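If you prefer the Python API to the CLI, here is a minimal download sketch using huggingface_hub's `snapshot_download` (the `local_dir` is an arbitrary example, and gated Gemma repos may require `huggingface-cli login` first):

```python
from huggingface_hub import snapshot_download

# Download the model repository (weights + tokenizer) to a local directory.
# The repo id matches the CLI example above; local_dir is an assumption.
ckpt_dir = snapshot_download(
    repo_id="google/gemma-3-4b-it-pytorch",
    local_dir="/tmp/ckpt",
)
print(ckpt_dir)  # use this as CKPT_PATH below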
```
VARIANT=<1b, 2b, 2b-v2, 4b, 7b, 9b, 12b, 27b, 27b_v3>
CKPT_PATH=<Insert ckpt path here>
```

Follow the steps at https://ai.google.dev/gemma/docs/pytorch_gemma.
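If you want to run outside the Docker images below, the linked tutorial boils down to roughly the following. This is a sketch, not the supported path: it assumes the `gemma.config`/`gemma.model` API that `scripts/run.py` and the tutorial use, and the variant, dtype, and file paths are illustrative.

```python
import torch

from gemma import config
from gemma import model as gemma_model

VARIANT = "2b"                    # any text-only variant from the list above
CKPT_DIR = "/tmp/ckpt"            # hypothetical download directory

# Build the model config for the chosen variant and point it at the tokenizer.
model_config = config.get_model_config(VARIANT)
model_config.tokenizer = f"{CKPT_DIR}/tokenizer.model"
model_config.dtype = "float32"    # "float16"/"bfloat16" are typical on GPU

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
torch.set_default_dtype(model_config.get_dtype())

# Instantiate, load weights, and generate.
model = gemma_model.GemmaForCausalLM(model_config)
model.load_weights(f"{CKPT_DIR}/model.ckpt")  # hypothetical checkpoint file name
model = model.to(device).eval()

print(model.generate("What is Gemma?", device, output_len=64))
```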
Prerequisite: make sure you have set up Docker permissions properly as a non-root user.
```
sudo usermod -aG docker $USER
newgrp docker
```

Build the docker image:

```
DOCKER_URI=gemma:${USER}

docker build -f docker/Dockerfile ./ -t ${DOCKER_URI}
```
NOTE: This is a multimodal example. Use a multimodal variant.
```
docker run -t --rm \
    -v ${CKPT_PATH}:/tmp/ckpt \
    ${DOCKER_URI} \
    python scripts/run_multimodal.py \
    --ckpt=/tmp/ckpt \
    --variant="${VARIANT}" \
    # add `--quant` for the int8 quantized model.
```
NOTE: This is a multimodal example. Use a multimodal variant.
```
docker run -t --rm \
    --gpus all \
    -v ${CKPT_PATH}:/tmp/ckpt \
    ${DOCKER_URI} \
    python scripts/run_multimodal.py \
    --device=cuda \
    --ckpt=/tmp/ckpt \
    --variant="${VARIANT}"
    # add `--quant` for the int8 quantized model.
```
```
DOCKER_URI=gemma_xla:${USER}

docker build -f docker/xla.Dockerfile ./ -t ${DOCKER_URI}
```
```
DOCKER_URI=gemma_xla_gpu:${USER}

docker build -f docker/xla_gpu.Dockerfile ./ -t ${DOCKER_URI}
```
NOTE: This is a multimodal example. Use a multimodal variant.
```
docker run -t --rm \
    --shm-size 4gb \
    -e PJRT_DEVICE=CPU \
    -v ${CKPT_PATH}:/tmp/ckpt \
    ${DOCKER_URI} \
    python scripts/run_xla.py \
    --ckpt=/tmp/ckpt \
    --variant="${VARIANT}" \
    # add `--quant` for the int8 quantized model.
```
Note: be sure to use the docker container built from `xla.Dockerfile`.
```
docker run -t --rm \
    --shm-size 4gb \
    -e PJRT_DEVICE=TPU \
    -v ${CKPT_PATH}:/tmp/ckpt \
    ${DOCKER_URI} \
    python scripts/run_xla.py \
    --ckpt=/tmp/ckpt \
    --variant="${VARIANT}" \
    # add `--quant` for the int8 quantized model.
```
Note: be sure to use the docker container built from `xla_gpu.Dockerfile`.
```
docker run -t --rm --privileged \
    --shm-size=16g --net=host --gpus all \
    -e USE_CUDA=1 \
    -e PJRT_DEVICE=CUDA \
    -v ${CKPT_PATH}:/tmp/ckpt \
    ${DOCKER_URI} \
    python scripts/run_xla.py \
    --ckpt=/tmp/ckpt \
    --variant="${VARIANT}" \
    # add `--quant` for the int8 quantized model.
```
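To confirm that PJRT picked up the expected device inside any of the XLA containers, a one-liner sketch (assuming the torch_xla package the XLA images are built around):

```python
import torch_xla.core.xla_model as xm

# Prints the XLA device selected via PJRT_DEVICE, e.g. a CPU, TPU, or CUDA device.
print(xm.xla_device())
```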
99 unused tokens are reserved in the pretrained tokenizer model to assist with more efficient training/fine-tuning. Unused tokens are in the string format of `<unused[0-98]>` with token id range of `[7-105]`.
"<unused0>": 7,"<unused1>": 8,"<unused2>": 9,..."<unused98>": 104,This is not an officially supported Google product.