Introduction to AI/ML workloads on GKE
This page provides a conceptual overview of Google Kubernetes Engine (GKE) for AI/ML workloads. GKE is a Google-managed implementation of the Kubernetes open source container orchestration platform.
Google Kubernetes Engine provides a scalable, flexible, and cost-effective platform for running all your containerized workloads, including artificial intelligence and machine learning (AI/ML) applications. Whether you're training large foundation models, serving inference requests at scale, or building a comprehensive AI platform, GKE offers the control and performance you need.
This page is for Data and AI specialists, Cloud architects, Operators, and Developers who are looking for a scalable, automated, managed Kubernetes solution to run AI/ML workloads. To learn more about common roles, see Common GKE user roles and tasks.
Get started with AI/ML workloads on GKE
You can start exploring GKE in minutes by using GKE's free tier, which lets you get started with Kubernetes without incurring costs for cluster management.
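For example, the following is a minimal sketch of creating a GKE Autopilot cluster programmatically, assuming the google-cloud-container Python client library. The project ID, region, and cluster name are placeholders you would replace with your own values.

```python
# A minimal sketch of creating a GKE Autopilot cluster, assuming the
# google-cloud-container client library. Project, region, and cluster
# name below are placeholders.
from google.cloud import container_v1

def create_autopilot_cluster(project_id: str, region: str, name: str) -> None:
    client = container_v1.ClusterManagerClient()
    cluster = container_v1.Cluster(
        name=name,
        # With Autopilot enabled, GKE provisions and scales nodes for
        # you, which pairs well with getting started on the free tier.
        autopilot=container_v1.Autopilot(enabled=True),
    )
    operation = client.create_cluster(
        parent=f"projects/{project_id}/locations/{region}",
        cluster=cluster,
    )
    print(f"Cluster creation started: {operation.name}")

create_autopilot_cluster("my-project", "us-central1", "ai-ml-demo")
```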
- Try these quickstarts:
- Inference on GKE: deploy a large language model (LLM) on GKE for inference using a pre-defined architecture (a minimal Deployment sketch follows this list).
- Training on GKE: train an AI model on GKE and store the predictions in Cloud Storage.
- Read About accelerator consumption options for AI/ML workloads, which provides guidance and resources for planning and obtaining accelerators (GPUs and TPUs) for your platform.
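To make the inference quickstart concrete, here is a minimal sketch, using the kubernetes Python client, of a Deployment that serves a model container and requests one GPU. The image name and labels are illustrative placeholders, not values from the quickstart itself.

```python
# A minimal sketch of an inference Deployment that requests one NVIDIA
# GPU. Image name and labels are placeholders.
from kubernetes import client, config

config.load_kube_config()  # Uses your local kubeconfig (e.g., from gcloud)

container = client.V1Container(
    name="llm-server",
    image="us-docker.pkg.dev/my-project/my-repo/llm-server:latest",  # placeholder
    resources=client.V1ResourceRequirements(
        # On GKE, GPUs are requested through the nvidia.com/gpu resource.
        limits={"nvidia.com/gpu": "1"},
    ),
    ports=[client.V1ContainerPort(container_port=8000)],
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="llm-inference"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "llm-inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "llm-inference"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```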
Common use cases
GKE provides a unified platform that can support all of your AI workloads.
- Building an AI platform: for enterprise platform teams, GKE provides the flexibility to build a standardized, multi-tenant platform that serves diverse needs.
- Low-latency online serving: for developers building generative AI applications, GKE with the Inference Gateway provides the optimized routing and autoscaling needed to deliver a responsive user experience while controlling costs. A simplified autoscaling sketch follows this list.
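As a simplified stand-in for that autoscaling behavior, the sketch below attaches a CPU-based HorizontalPodAutoscaler to an inference Deployment with the kubernetes Python client. The Inference Gateway adds model-aware routing and scaling signals beyond what a plain HPA provides, and the deployment name here is a placeholder.

```python
# A simplified sketch: a CPU-based HorizontalPodAutoscaler attached to
# an inference Deployment. The target deployment name is a placeholder.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="llm-inference-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="llm-inference",
        ),
        min_replicas=1,
        max_replicas=10,
        # Scale out when average CPU crosses 60% of the requested CPU.
        target_cpu_utilization_percentage=60,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa,
)
```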
Choose the right platform for your AI/ML workload
Google Cloud offers a spectrum of AI infrastructure products to support your ML journey, from fully managed to fully configurable. Choosing the right platform depends on your specific needs for control, flexibility, and level of management.
Choose GKE when you need deep control, portability, and the ability to build a customized, high-performance AI platform.
- Infrastructure control and flexibility: you require a high degree of control over your infrastructure, need to use custom pipelines, or require kernel-level customizations.
- Large-scale training and inference: you want to train very large models or serve models with minimal latency by using GKE's scaling and high performance.
- Cost efficiency at scale: you want to prioritize cost optimization by using GKE's integration with Spot VMs and Flex-start VMs (see the scheduling sketch after this list).
- Portability and open standards: you want to avoid vendor lock-in and run your workloads anywhere with Kubernetes, and you already have existing Kubernetes expertise or a multi-cloud strategy.
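As an illustration of the Spot VM integration, the following sketch steers a fault-tolerant Pod onto Spot capacity. It assumes the kubernetes Python client and relies on the cloud.google.com/gke-spot node label; depending on your node pool configuration, a matching toleration may also be needed, as shown.

```python
# A minimal sketch of steering a fault-tolerant Pod onto Spot VMs.
# GKE labels Spot nodes with cloud.google.com/gke-spot="true"; the
# toleration matches the taint Spot nodes may carry.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="spot-batch-worker"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="worker",
                image="busybox",  # placeholder for a real batch/training image
                command=["sh", "-c", "echo running a batch step && sleep 60"],
            ),
        ],
        # Schedule only onto Spot nodes.
        node_selector={"cloud.google.com/gke-spot": "true"},
        # Tolerate the Spot taint so the scheduler can place the Pod there.
        tolerations=[
            client.V1Toleration(
                key="cloud.google.com/gke-spot",
                operator="Equal",
                value="true",
                effect="NoSchedule",
            ),
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```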
You can also consider these alternatives:
| Google Cloud service | Best for |
|---|---|
| Vertex AI | A fully managed, end-to-end platform to accelerate development and offload infrastructure management. Works well for teams focused on MLOps and rapid time-to-value. For more information, watch Choosing between self-hosted GKE and managed Vertex AI to host AI models. |
| Cloud Run | A serverless platform for containerized inference workloads that can scale to zero. Works well for event-driven applications and serving smaller models cost-effectively. For a comparative deep-dive, see GKE and Cloud Run. |
How GKE powers AI/ML workloads
GKE offers a suite of specialized components that simplify and accelerate each stage of the AI/ML lifecycle, from large-scale training to low-latency inference.
The following table summarizes how GKE supports your AI/ML workloads and operational goals.
| AI/ML workload or operation | How GKE supports you |
|---|---|
| Inference and serving | Optimized to serve AI models elastically, with low latency, high throughput, and cost efficiency. |
| Training and fine-tuning | Provides the scale and orchestration capabilities necessary to efficiently train very large models while minimizing costs. |
| Unified AI/ML development | Managed support for Ray, an open-source framework for scaling distributed Python applications (see the sketch after this table). |
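To show what the Ray row means in practice, here is a minimal Ray sketch. The same Python code runs unchanged on a laptop or on a Ray cluster on GKE; ray.init() connects to a configured cluster or starts a local one. The score function is a hypothetical stand-in for per-batch model work.

```python
# A minimal Ray sketch: score() is a hypothetical stand-in for model
# work; Ray distributes the calls across the cluster's workers.
import ray

ray.init()  # Connects to a configured Ray cluster, or starts one locally

@ray.remote
def score(batch: list) -> float:
    return sum(batch) / len(batch)

# Launch four tasks in parallel and gather their results.
futures = [score.remote([i, i + 1, i + 2]) for i in range(4)]
print(ray.get(futures))
```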
What's next
- To explore our extensive collections of official guides, tutorials, and other resources for running AI/ML workloads on GKE, visit the AI/ML orchestration on GKE portal.
- Learn about techniques to obtain computing accelerators, such as GPUs or TPUs, for your AI/ML workloads on GKE.
- Learn about AI/ML model inference on GKE.
- Learn about Ray on GKE.
- Explore experimental samples for leveraging GKE to accelerate your AI/ML initiatives in GKE AI Labs.
- View details for your AI/ML workloads in the Google Cloud console, including resources such as JobSets, RayJobs, PyTorchJobs, and Deployments for inference serving.