Run batch inference using GPUs on Cloud Run jobs

You can run batch inference with Meta's Llama 3.2 1B LLM and vLLM on a Cloud Run job, then write the results directly to Cloud Storage using Cloud Run volume mounts.
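The sketch below illustrates the core of such a job: a worker script that loads the model with vLLM, runs batch generation, and writes results to the mounted bucket. The model ID, mount path, and file names are illustrative assumptions, not values prescribed by this page; see the codelab linked below for the full setup.

```python
# batch_inference.py -- minimal sketch of a Cloud Run job worker.
# Assumes vLLM is installed and a Cloud Storage bucket is mounted
# via a Cloud Run volume mount (the mount path here is hypothetical).
import json
import os

from vllm import LLM, SamplingParams

# Hypothetical mount point for the Cloud Storage volume.
MOUNT_PATH = os.environ.get("RESULTS_MOUNT", "/results")


def main() -> None:
    # Example prompts; in practice these might be read from the mounted bucket.
    prompts = [
        "Summarize the benefits of serverless batch processing.",
        "Explain what a Cloud Run job is in one sentence.",
    ]
    sampling_params = SamplingParams(temperature=0.7, max_tokens=128)

    # Load Llama 3.2 1B onto the job's attached GPU.
    llm = LLM(model="meta-llama/Llama-3.2-1B")
    outputs = llm.generate(prompts, sampling_params)

    # Write one JSON line per prompt into the mounted directory;
    # the Cloud Run volume mount persists these files to Cloud Storage.
    out_file = os.path.join(MOUNT_PATH, "predictions.jsonl")
    with open(out_file, "w") as f:
        for output in outputs:
            f.write(json.dumps({
                "prompt": output.prompt,
                "completion": output.outputs[0].text,
            }) + "\n")


if __name__ == "__main__":
    main()
```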

For a step-by-step tutorial, see the codelab How to run batch inference on Cloud Run jobs.
