Dataflow support for GPUs
This page provides background information about how GPUs work with Dataflow, including prerequisites and supported GPU types.
Using GPUs in Dataflow jobs lets you accelerate some data processing tasks. GPUs can perform certain computations faster than CPUs. These computations are usually numeric or linear algebra computations, often used in image processing and machine learning use cases. The extent of the performance improvement varies by the use case, the type of computation, and the amount of data processed.
Prerequisites for using GPUs in Dataflow
- To use GPUs with your Dataflow job, you must use Runner v2.
- Dataflow runs user code in worker VMs inside a Docker container. These worker VMs run Container-Optimized OS. For Dataflow jobs to use GPUs, you need the following prerequisites:
  - GPU drivers are installed on worker VMs and accessible to the Docker container. For more information, see Install GPU drivers.
  - GPU libraries required by your pipeline, such as NVIDIA CUDA-X libraries or the NVIDIA CUDA Toolkit, are installed in the custom container image. For more information, see Configure your container image.
  - Because GPU containers are typically large, to avoid running out of disk space, increase the default boot disk size to 50 gigabytes or more. A sketch of pipeline options that reflect these prerequisites follows this list.
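For example, a minimal sketch of Python pipeline options that satisfy these prerequisites might look like the following. The project, region, bucket, container image, GPU type, and GPU count are placeholders chosen for illustration; substitute values that match your pipeline.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# All resource names below are placeholders; substitute your own project,
# region, bucket, and container image.
options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
    # Attach one NVIDIA T4 to each worker and have Dataflow install the driver.
    dataflow_service_options=[
        "worker_accelerator=type:nvidia-tesla-t4;count:1;install-nvidia-driver"
    ],
    experiments=["use_runner_v2"],  # GPUs require Runner v2
    # Custom image that bundles the GPU libraries your pipeline needs.
    sdk_container_image="us-central1-docker.pkg.dev/my-project/my-repo/beam-gpu:latest",
    disk_size_gb=50,  # larger boot disk for the large GPU container image
)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | beam.Create([1, 2, 3])
        | beam.Map(lambda x: x * x)  # replace with your GPU-accelerated transform
    )
```

Depending on your Apache Beam SDK version, Runner v2 might already be the default, in which case the experiments flag is redundant.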
Pricing
Jobs using GPUs incur charges as specified in the Dataflow pricing page.
Availability
Note: TPUs are also supported with Dataflow. For more information, see Dataflow support for TPUs.
The following GPU types are supported with Dataflow:
| GPU type | worker_accelerator string |
|---|---|
| NVIDIA® L4 | nvidia-l4 |
| NVIDIA® A100 40 GB | nvidia-tesla-a100 |
| NVIDIA® A100 80 GB | nvidia-a100-80gb |
| NVIDIA® Tesla® T4 | nvidia-tesla-t4 |
| NVIDIA® Tesla® P4 | nvidia-tesla-p4 |
| NVIDIA® Tesla® V100 | nvidia-tesla-v100 |
| NVIDIA® Tesla® P100 | nvidia-tesla-p100 |
| NVIDIA® H100 | nvidia-h100-80gb |
| NVIDIA® H100 Mega | nvidia-h100-mega-80gb |
For more information about each GPU type, including performance data, see Compute Engine GPU platforms.
For information about available regions and zones for GPUs, see GPU regions and zones availability in the Compute Engine documentation.
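To confirm which accelerator types a particular zone offers before choosing one from the table above, one option is to query the Compute Engine API. The following sketch assumes the google-api-python-client library and Application Default Credentials with Compute Engine read access; the project ID and zone are placeholders.

```python
from googleapiclient import discovery

# Placeholder project and zone; replace with your own values.
compute = discovery.build("compute", "v1")
request = compute.acceleratorTypes().list(project="my-project", zone="us-central1-a")
while request is not None:
    response = request.execute()
    for accelerator in response.get("items", []):
        # Prints the worker_accelerator string (for example, nvidia-tesla-t4)
        # and its description for each GPU type available in the zone.
        print(accelerator["name"], "-", accelerator["description"])
    request = compute.acceleratorTypes().list_next(request, response)
```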
Recommended workloads
The following table provides recommendations for which type of GPU to use fordifferent workloads. The examples in the table are suggestions only, and youneed to test in your own environment to determine the appropriate GPU type foryour workload.
For more detailed information about GPU memory size, feature availability, and ideal workload types for different GPU models, see the General comparison chart on the GPU platforms page.
| Workload | A100, H100 | L4 | T4 |
|---|---|---|---|
| Model fine-tuning | Recommended | | |
| Large model inference | Recommended | Recommended | |
| Medium model inference | | Recommended | Recommended |
| Small model inference | | Recommended | Recommended |
What's next
- See an example of a developer workflow for building pipelines that use GPUs.
- Learn how to run an Apache Beam pipeline on Dataflow with GPUs.
- Work through Processing Landsat satellite images with GPUs.