NVIDIA/JAX-Toolbox
JAX Toolbox provides a public CI, Docker images for popular JAX libraries, and optimized JAX examples to simplify and enhance your JAX development experience on NVIDIA GPUs. It supports JAX libraries such as MaxText and Pallas.
We support and test the following JAX frameworks and model architectures. More details about each model and available containers can be found in their respective READMEs.
| Framework | Models | Use cases | Container |
|---|---|---|---|
| maxtext | GPT, LLaMA, Gemma, Mistral, Mixtral | pre-training | ghcr.io/nvidia/jax:maxtext |
| t5x | T5, ViT | pre-training, fine-tuning | ghcr.io/nvidia/jax:t5x |
| t5x | Imagen | pre-training | ghcr.io/nvidia/t5x:imagen-2023-10-02.v3 |
| axlearn | Fuji | pre-training | ghcr.io/nvidia/jax:axlearn |
In all cases, `ghcr.io/nvidia/jax:XXX` points to the latest nightly build of the container for `XXX`. For a stable reference, use `ghcr.io/nvidia/jax:XXX-YYYY-MM-DD`.
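For example, a pinned tag can be composed from the library name and a build date; this is a minimal sketch, and the date below is illustrative (pick one that actually exists in the registry):

```shell
# Compose a pinned, dated tag instead of the moving nightly tag
# (LIB and DATE are illustrative placeholders)
LIB=maxtext
DATE=2025-01-31
TAG="ghcr.io/nvidia/jax:${LIB}-${DATE}"
echo "$TAG"
# → ghcr.io/nvidia/jax:maxtext-2025-01-31
# docker pull "$TAG"   # pull the pinned image for reproducible builds
```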
In addition to the public CI, we also run internal CI tests on H100 SXM 80GB and A100 SXM 80GB.
The JAX image is embedded with the following flags and environment variables for performance tuning of XLA and NCCL:
| XLA Flags | Value | Explanation |
|---|---|---|
| `--xla_gpu_enable_latency_hiding_scheduler` | `true` | allows XLA to move communication collectives to increase overlap with compute kernels |
There are various other XLA flags users can set to improve performance. For a detailed explanation of these flags, please refer to the GPU performance doc. XLA flags can also be tuned per workload. For example, each script includes a directory `xla_flags`.
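As a sketch of per-workload tuning, XLA flags are passed through the `XLA_FLAGS` environment variable, which JAX reads at startup; the flag shown is the one from the table above, and `train.py` is a hypothetical workload script:

```shell
# Override XLA flags for a single run; must be set before JAX initializes
export XLA_FLAGS="--xla_gpu_enable_latency_hiding_scheduler=true"
echo "$XLA_FLAGS"
# python train.py   # hypothetical workload; JAX picks up XLA_FLAGS at import time
```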
For a list of previously used XLA flags that are no longer needed, please also refer to the GPU performance page.
| First nightly with new base container | Base container |
|---|---|
| 2025-10-02 | nvcr.io/nvidia/cuda-dl-base:25.09-cuda13.0-devel-ubuntu24.04 |
| 2025-08-22 | nvcr.io/nvidia/cuda-dl-base:25.08-cuda13.0-devel-ubuntu24.04 |
| 2025-07-03 | nvcr.io/nvidia/cuda-dl-base:25.06-cuda12.9-devel-ubuntu24.04 |
| 2025-04-11 | nvcr.io/nvidia/cuda-dl-base:25.03-cuda12.8-devel-ubuntu24.04 |
| 2025-03-04 | nvcr.io/nvidia/cuda-dl-base:25.02-cuda12.8-devel-ubuntu24.04 |
| 2025-01-31 | nvcr.io/nvidia/cuda-dl-base:25.01-cuda12.8-devel-ubuntu24.04 |
| 2025-01-28 | nvcr.io/nvidia/cuda-dl-base:24.11-cuda12.6-devel-ubuntu24.04 |
| 2024-12-07 | nvidia/cuda:12.6.3-devel-ubuntu22.04 |
| 2024-11-06 | nvidia/cuda:12.6.2-devel-ubuntu22.04 |
| 2024-09-25 | nvidia/cuda:12.6.1-devel-ubuntu22.04 |
| 2024-07-24 | nvidia/cuda:12.5.0-devel-ubuntu22.04 |
See this page for more information about how to profile JAX programs on GPU.
`bus error` when running JAX in a docker container
Solution:

```shell
docker run -it --shm-size=1g ...
```
Explanation: The `bus error` might occur due to the size limitation of `/dev/shm`. You can address this by increasing the shared memory size using the `--shm-size` option when launching your container.
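To check whether shared memory is the culprit, you can inspect the `/dev/shm` mount from inside the container; Docker's default allocation is only 64M, which JAX workloads can exhaust:

```shell
# Inspect the shared-memory mount size inside the container
df -h /dev/shm
```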
enroot/pyxis reports error code 404 when importing multi-arch images
Problem description:
```
slurmstepd: error: pyxis: [INFO] Authentication succeeded
slurmstepd: error: pyxis: [INFO] Fetching image manifest list
slurmstepd: error: pyxis: [INFO] Fetching image manifest
slurmstepd: error: pyxis: [ERROR] URL https://ghcr.io/v2/nvidia/jax/manifests/<TAG> returned error code: 404 Not Found
```

Solution: Upgrade enroot or apply a single-file patch as mentioned in the enroot v3.4.0 release note.
Explanation: Docker has traditionally used Docker Schema V2.2 for multi-arch manifest lists, but has switched to the Open Container Initiative (OCI) format since version 20.10. Enroot added support for the OCI format in version 3.4.0.
- AWS
- GCP
- Azure
- OCI