Run LLM inference on Cloud Run with Hugging Face TGI
The following example shows how to run a backend service that runs the Hugging Face Text Generation Inference (TGI) toolkit with Llama 3. Hugging Face TGI is a toolkit for deploying and serving open Large Language Models (LLMs), and it can run on a Cloud Run service with GPUs enabled.
See the entire example at Deploy Llama 3.1 8B with TGI DLC on Cloud Run.
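As a rough orientation (not a substitute for the linked walkthrough), deploying a TGI container to Cloud Run with an attached GPU could look like the following gcloud command. The container image URI, model ID, region, and resource sizes shown here are illustrative assumptions; the linked example uses the Hugging Face Deep Learning Container and tested values, and gated models such as Llama 3.1 also require a Hugging Face access token.

```bash
# Sketch: deploy a Hugging Face TGI container to Cloud Run with one NVIDIA L4 GPU.
# The image URI, model ID, region, and resource sizes are assumptions for illustration;
# see the linked Llama 3.1 8B example for the exact, tested configuration.
gcloud run deploy tgi-llama \
  --image=ghcr.io/huggingface/text-generation-inference:latest \
  --region=us-central1 \
  --port=8080 \
  --cpu=8 \
  --memory=32Gi \
  --gpu=1 \
  --gpu-type=nvidia-l4 \
  --no-cpu-throttling \
  --max-instances=1 \
  --concurrency=64 \
  --set-env-vars=MODEL_ID=meta-llama/Llama-3.1-8B-Instruct,HF_TOKEN=YOUR_HF_TOKEN \
  --no-allow-unauthenticated
```

Once deployed, the service exposes TGI's HTTP API. For example, an authenticated request to the /generate endpoint could look like this (assuming SERVICE_URL holds the URL printed by the deploy command and the caller has the Cloud Run Invoker role):

```bash
# Send a generation request to the deployed TGI service.
curl -X POST "$SERVICE_URL/generate" \
  -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "What is Cloud Run?", "parameters": {"max_new_tokens": 64}}'
```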