JAX Toolbox provides a public CI, Docker images for popular JAX libraries, and optimized JAX examples to simplify and enhance your JAX development experience on NVIDIA GPUs. It supports JAX libraries such as MaxText, Paxml, and Pallas.
We support and test the following JAX frameworks and model architectures. More details about each model and available containers can be found in their respective READMEs.
Framework | Models | Use cases | Container |
---|---|---|---|
maxtext | GPT, LLaMA, Gemma, Mistral, Mixtral | pretraining | ghcr.io/nvidia/jax:maxtext |
paxml | GPT, LLaMA, MoE | pretraining, fine-tuning, LoRA | ghcr.io/nvidia/jax:pax |
t5x | T5, ViT | pretraining, fine-tuning | ghcr.io/nvidia/jax:t5x |
t5x | Imagen | pretraining | ghcr.io/nvidia/t5x:imagen-2023-10-02.v3 |
big vision | PaliGemma | fine-tuning, evaluation | ghcr.io/nvidia/jax:gemma |
levanter | GPT, LLaMA, MPT, Backpacks | pretraining, fine-tuning | ghcr.io/nvidia/jax:levanter |
Container | Build | Test |
---|---|---|
ghcr.io/nvidia/jax:base | | [no tests] |
ghcr.io/nvidia/jax:jax | | |
ghcr.io/nvidia/jax:levanter | | |
ghcr.io/nvidia/jax:equinox | | [tests disabled] |
ghcr.io/nvidia/jax:triton | | |
ghcr.io/nvidia/jax:upstream-t5x | | |
ghcr.io/nvidia/jax:t5x | | |
ghcr.io/nvidia/jax:upstream-pax | | |
ghcr.io/nvidia/jax:pax | | |
ghcr.io/nvidia/jax:maxtext | | |
ghcr.io/nvidia/jax:gemma | | |
In all cases, `ghcr.io/nvidia/jax:XXX` points to the latest nightly build of the container for `XXX`. For a stable reference, use `ghcr.io/nvidia/jax:XXX-YYYY-MM-DD`.
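As a minimal sketch of pinning a dated nightly for reproducible setups (the container name and date below are illustrative; substitute a tag that actually exists on ghcr.io):

```bash
# Compose a stable, dated tag instead of relying on the moving nightly tag.
CONTAINER=maxtext
DATE=2024-11-06
IMAGE="ghcr.io/nvidia/jax:${CONTAINER}-${DATE}"
echo "$IMAGE"
# docker pull "$IMAGE"   # requires Docker and network access to ghcr.io
```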
In addition to the public CI, we run internal CI tests on H100 SXM 80GB and A100 SXM 80GB GPUs.
The JAX image is embedded with the following flags and environment variables for performance tuning of XLA and NCCL:
XLA Flags | Value | Explanation |
---|---|---|
--xla_gpu_enable_latency_hiding_scheduler | true | allows XLA to move communication collectives to increase overlap with compute kernels |
--xla_gpu_enable_triton_gemm | false | use cuBLAS instead of Triton GEMM kernels |
Environment Variable | Value | Explanation |
---|---|---|
CUDA_DEVICE_MAX_CONNECTIONS | 1 | use a single queue for GPU work to lower latency of stream operations; OK since XLA already orders launches |
NCCL_NVLS_ENABLE | 0 | disables NVLink SHARP; future releases will re-enable this feature |
There are various other XLA flags users can set to improve performance. For a detailed explanation of these flags, please refer to the GPU performance doc. XLA flags can be tuned per workflow. For example, each script in contrib/gpu/scripts_gpu sets its own XLA flags.
For a list of previously used XLA flags that are no longer needed, please also refer to the GPU performance page.
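As a minimal sketch of per-workflow tuning, the flags and variables from the tables above can be exported before launching a job; the training command at the end is a placeholder, not part of the toolbox:

```bash
# Override the baked-in defaults for a single run. The flag and variable
# names come from the tables above; only the launch command is hypothetical.
export XLA_FLAGS="--xla_gpu_enable_latency_hiding_scheduler=true --xla_gpu_enable_triton_gemm=false"
export CUDA_DEVICE_MAX_CONNECTIONS=1
export NCCL_NVLS_ENABLE=0
# python train.py   # launch your workload here
```

Because these are ordinary environment variables, they can also be passed to `docker run` with `-e` instead of being exported inside the container.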
First nightly with new base container | Base container |
---|---|
2024-11-06 | nvidia/cuda:12.6.2-devel-ubuntu22.04 |
2024-09-25 | nvidia/cuda:12.6.1-devel-ubuntu22.04 |
2024-07-24 | nvidia/cuda:12.5.0-devel-ubuntu22.04 |
See this page for more information about how to profile JAX programs on GPU.
Solution:

```
docker run -it --shm-size=1g ...
```

Explanation: The bus error may occur due to the size limitation of `/dev/shm`. You can address this by increasing the shared memory size using the `--shm-size` option when launching your container.
Problem description:
```
slurmstepd: error: pyxis: [INFO] Authentication succeeded
slurmstepd: error: pyxis: [INFO] Fetching image manifest list
slurmstepd: error: pyxis: [INFO] Fetching image manifest
slurmstepd: error: pyxis: [ERROR] URL https://ghcr.io/v2/nvidia/jax/manifests/ returned error code: 404 Not Found
```
Solution: Upgrade enroot or apply a single-file patch as mentioned in the enroot v3.4.0 release note.

Explanation: Docker has traditionally used Docker Schema V2.2 for multi-arch manifest lists, but switched to the Open Container Initiative (OCI) format in version 20.10. Enroot added support for the OCI format in version 3.4.0.
AWS
Add EFA integration
SageMaker code sample
GCP
Getting started with JAX multi-node applications with NVIDIA GPUs on Google Kubernetes Engine
Azure
Accelerating AI applications using the JAX framework on Azure’s NDm A100 v4 Virtual Machines
OCI
Running a deep learning workload with JAX on multinode multi-GPU clusters on OCI
JAX | NVIDIA NGC Container
Slurm and OpenMPI zero config integration
Adding custom GPU ops
Triaging regressions
Equinox for JAX: The Foundation of an Ecosystem for Science and Machine Learning
Scaling Grok with JAX and H100
JAX Supercharged on GPUs: High Performance LLMs with JAX and OpenXLA
What's New in JAX | GTC Spring 2024
What's New in JAX | GTC Spring 2023