Prepare to migrate to Autopilot from Standard
This page provides considerations and recommendations to help you migrate workloads from Standard Google Kubernetes Engine (GKE) clusters to Autopilot clusters with minimal disruption to your services. This page is for cluster administrators who have already decided to migrate to Autopilot. If you need more information before you decide to migrate, see Choose a GKE mode of operation and Compare GKE Autopilot and Standard.
How migration works
Autopilot clusters automate many of the optional features and functionality that require manual configuration in Standard clusters. Additionally, Autopilot clusters enforce more secure default configurations for applications to provide a more production-ready environment, and reduce your required management overhead compared to Standard mode. Autopilot clusters apply many GKE best practices and recommendations by default. Autopilot uses a workload-centric configuration model, where you request what you need in your Kubernetes manifests and GKE provisions the corresponding infrastructure.
When you migrate your Standard workloads to Autopilot, you should prepare your workload manifests to ensure that they're compatible with Autopilot clusters, for example by ensuring that your manifests request infrastructure that you would normally have to provision yourself.
To prepare and execute a successful migration, you'll do the following high-level tasks:
- Run a pre-flight check on your existing Standard cluster to confirm compatibility with Autopilot.
- If applicable, modify your workload manifests to become Autopilot-compatible.
- Do a dry-run where you check that your workloads function correctly on Autopilot.
- Plan and create the Autopilot cluster.
- If applicable, update your infrastructure-as-code tooling.
- Perform the migration.
Before you begin
Before you start, make sure that you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running the `gcloud components update` command. Earlier gcloud CLI versions might not support running the commands in this document.
  Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you primarily use zonal clusters, set the compute/zone property instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.
- Ensure that you have an existing Standard cluster with running workloads.
- Ensure that you have an Autopilot cluster with no workloads to perform dry-runs. To create a new Autopilot cluster, see Create an Autopilot cluster.
Enable the pre-flight check component in your cluster
In GKE version 1.31.6-gke.1027000 and later, the Autopilot pre-flight check component is disabled by default. You must enable the pre-flight check component before you can run the check in a cluster. If your cluster runs a GKE version earlier than 1.31.6-gke.1027000, skip to the next section.
Caution: Enabling the pre-flight check component triggers a control plane update operation that might take up to 30 minutes to complete. The control plane in zonal Standard clusters is unavailable during this time. Regional Standard clusters and Autopilot clusters, which update one control plane replica at a time, remain available.
Enable the pre-flight check component in your cluster:
```
gcloud container clusters update CLUSTER_NAME \
    --location=LOCATION \
    --enable-autopilot-compatibility-auditing
```

Replace the following:

- CLUSTER_NAME: the name of your Standard cluster.
- LOCATION: the location of your Standard cluster, such as us-central1.
The update operation takes up to 30 minutes to complete.
Run a pre-flight check on your Standard cluster
The Google Cloud CLI and the Google Kubernetes Engine API provide a pre-flight check tool that validates the specifications of your running Standard workloads to identify incompatibilities with Autopilot clusters. This tool is available in GKE version 1.26 and later.
- To use this tool on the command line, run the following command:

```
gcloud container clusters check-autopilot-compatibility CLUSTER_NAME
```

Replace CLUSTER_NAME with the name of your Standard cluster. Optionally, add `--format=json` to this command to get the output in JSON format.
The output contains findings for all your running Standard workloads, categorized and with actionable recommendations to ensure compatibility with Autopilot, where applicable. The following table describes the categories:
| Pre-flight tool result | Description |
|---|---|
| Passed | The workload will run as expected with no configuration needed for Autopilot. |
| Passed with optional configuration | The workload will run on Autopilot, but you can make optional configuration changes to optimize the experience. If you don't make configuration changes, Autopilot applies a default configuration for you. For example, if your workload was running on N2 machines in Standard mode, GKE applies the general-purpose compute class for Autopilot. You can optionally modify the workload to request the Balanced compute class, which is backed by N2 machines. |
| Additional configuration required | The workload won't run on Autopilot unless you make a configuration change. For example, consider a container that uses the NET_ADMIN capability in Standard. Autopilot drops this capability by default for improved security, so you'll need to enable NET_ADMIN on the cluster before you deploy the workload. |
| Incompatibility | The workload won't run on Autopilot because it uses functionality that Autopilot doesn't support. For example, Autopilot clusters reject privileged Pods (`privileged: true` in the container's security context). |
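As an illustration of the Additional configuration required category above, here's a minimal sketch of a Pod manifest that requests the NET_ADMIN capability; the Pod name and image are hypothetical placeholders. The pre-flight check would flag this workload, and Autopilot only honors the capability after you enable NET_ADMIN on the cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: net-admin-demo              # hypothetical Pod name
spec:
  containers:
  - name: net-tool
    image: example.com/net-tool:latest   # placeholder image
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]          # dropped by Autopilot by default; enable it on the cluster first
    resources:
      requests:                     # Autopilot expects explicit resource requests
        cpu: 250m
        memory: 512Mi
```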
Modify your workload specifications based on the pre-flight results
After you run the pre-flight check, step through the JSON output and identify workloads that need to change. We recommend implementing even the optional configuration recommendations. Each finding also provides a link to documentation that shows you what the workload specification should look like.
The most important difference between Autopilot and Standard is that infrastructure configuration in Autopilot is automated based on the workload specification. Kubernetes scheduling controls, such as node taints and tolerations, are automatically added to your running workloads. If necessary, you should also modify your infrastructure-as-code configurations, such as Helm charts or Kustomize overlays, to match.
Note: As a best practice, make copies of your original workload specifications so that rolling back your migration takes less time.
Some common configuration changes you'll need to make include the following:
| Configuration | Common change for Autopilot |
|---|---|
| Compute and architecture configuration | Autopilot clusters use the E-series machine type by default. If you need other machine types, your workload specification must request a compute class, which tells Autopilot to place those Pods on nodes that use specific machine types or architectures. For details, see Compute classes in Autopilot. |
| Accelerators | GPU-based workloads must request GPUs in the workload specification. Autopilot automatically provisions nodes with the required machine type and accelerators. For details, see Deploy GPU workloads in Autopilot. |
| Resource requests | All Autopilot workloads need to specify CPU and memory values in `resources.requests`. For details, see Resource requests in Autopilot. |
| Fault-tolerant workloads on Spot VMs | If your workloads run on Spot VMs in Standard, request Spot Pods in the workload configuration by setting a node selector for `cloud.google.com/gke-spot: "true"`. For details, see Spot Pods. |
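The following Deployment is a minimal sketch that combines several of these changes; the workload name and labels are hypothetical, and the sample image is a public GKE sample. It requests the Balanced compute class through a node selector and sets explicit resource requests; the commented-out selector shows the Spot Pods alternative for fault-tolerant workloads:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                          # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      nodeSelector:
        cloud.google.com/compute-class: Balanced   # N2-backed Balanced compute class
        # For fault-tolerant workloads on Spot capacity, you could instead set:
        # cloud.google.com/gke-spot: "true"
      containers:
      - name: web
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
        resources:
          requests:                  # Autopilot provisions and bills based on these values
            cpu: 500m
            memory: 1Gi
```

GPU workloads additionally set an accelerator node selector and request GPU quantities in the container resources; see the GPU page linked in the table for the exact fields.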
Perform a dry-run on a staging Autopilot cluster
After you modify each workload manifest, do a dry-run deployment on a new staging Autopilot cluster to ensure that the workload runs as expected.
Command line
Run the following command:
```
kubectl create --dry-run=server -f=PATH_TO_MANIFEST
```

Replace PATH_TO_MANIFEST with the path to the modified workload manifest.
IDE
If you use the Cloud Shell Editor, the dry-run command is built in and runs on any open manifests. If you use Visual Studio Code or IntelliJ IDEs, install the Cloud Code extension to automatically run the dry-run on any open manifests.
The Problems pane in the IDE shows any dry-run issues, such as a failed dry-run for a manifest that specifies `privileged: true`.

Plan the destination Autopilot cluster
When your dry-run no longer displays issues, plan and create the new Autopilot cluster for your workloads. This cluster is different from the Autopilot cluster that you used to test your manifest modifications in the preceding section.
Use About cluster configuration choices for basic configuration requirements. Then, read the Autopilot overview, which provides information specific to your use case at different layers.
Additionally, consider the following:
- Autopilot clusters are VPC-native, so we don't recommend migrating to Autopilot from routes-based Standard clusters.
- Use the same or a similar VPC for the Autopilot cluster and the Standard cluster, including any custom firewall rules and VPC settings.
- Autopilot clusters use GKE Dataplane V2 and only support Cilium NetworkPolicies. Calico NetworkPolicies are not supported.
- If you want to use IP masquerading in Autopilot, use an Egress NAT policy (a sketch follows this list).
- Specify the primary IPv4 range for the cluster during cluster creation, with the same range size as the Standard cluster.
- Learn about the quota differences between modes, especially if you have large clusters.
- Learn about the Pods-per-node maximums for Autopilot, which are different from Standard. This matters more if you use node or Pod affinity often.
- All Autopilot clusters use Cloud DNS.
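For reference, here's a minimal sketch of an Egress NAT policy that disables masquerading for a destination range; the policy name and CIDR are hypothetical, and you should confirm the current schema in the Egress NAT policy documentation:

```yaml
apiVersion: networking.gke.io/v1
kind: EgressNATPolicy
metadata:
  name: skip-snat-example           # hypothetical policy name
spec:
  action: SkipSNAT                  # don't masquerade traffic to the listed destinations
  destinations:
  - cidr: "10.0.0.0/8"              # example destination range; adjust to your network
```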
Create the Autopilot cluster
When you're ready to create the cluster, use Create an Autopilot cluster. All Autopilot clusters are regional and are automatically enrolled in a release channel, although you can specify the channel and cluster version. We recommend deploying a small sample workload to the cluster to trigger node auto-provisioning so that your production workloads can schedule immediately.
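As a minimal sketch, cluster creation might look like the following; the cluster name and location are placeholders, and you would add networking and other flags based on your plan from the preceding section:

```
gcloud container clusters create-auto CLUSTER_NAME \
    --location=LOCATION \
    --release-channel=regular
```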
Update your infrastructure-as-code tooling
Infrastructure-as-code providers such as Terraform and Pulumi support Autopilot clusters.
Read your preferred provider's documentation and modify your configurations.
Choose a migration approach
The migration method that you use depends on your individual workload and how comfortable you are with networking concepts such as multi-cluster Services and multi-cluster Ingress, as well as how you manage the state of the Kubernetes objects in your cluster.
| Workload type | Pre-flight tool results | Migration approach |
|---|---|---|
| Stateless | Passed or Passed with optional configuration | Migrate with no downtime by using multi-cluster Services and multi-cluster Ingress. For high-level steps, see Manually migrate stateless workloads with no downtime. Alternatively, use Backup for GKE to migrate all workloads during scheduled downtime. |
| Stateless | Additional configuration required | Update your Kubernetes manifests, then migrate by using one of the preceding approaches. |
| Stateful | Passed or Passed with optional configuration | Use one of the following methods: Backup for GKE (see Migrate all workloads using Backup for GKE), or a manual migration during scheduled downtime (see Manually migrate stateful workloads). |
| Stateful | Additional configuration required | Update your Kubernetes manifests and redeploy on Autopilot during scheduled downtime. For high-level steps, see Manually migrate stateful workloads. |
High-level migration steps
Before you begin a migration, ensure that you resolved any Incompatibility or Additional configuration required results from the pre-flight check. If you deploy workloads with those results on Autopilot without modifications, the workloads will fail.
The following sections are a high-level overview of a hypothetical migration. Actual steps will vary depending on your environment and each of your workloads. Plan, test, and re-test workloads for issues before migrating a production environment. Considerations include the following:
- The duration of the migration process depends on how many workloads you're migrating.
- Downtime is required while you migrate stateful workloads.
- Manual migration lets you focus on individual workloads during the migration so that you can resolve issues in real time on a case-by-case basis.
- In all cases, ensure that you migrate Services, Ingresses, and other Kubernetes objects that facilitate the functionality of your stateless and stateful workloads.
Migrate all workloads using Backup for GKE
Caution: This approach requires downtime for your cluster, for both stateful and stateless workloads. Notify your users of the upcoming downtime.
If all the workloads (stateful and stateless) running in your Standard cluster are compatible with Autopilot and the pre-flight tool returns either Passed or Passed with optional configuration for every workload, you can use Backup for GKE to back up the entire state of your Standard cluster and workloads and restore the backup onto the Autopilot cluster.
This approach has the following benefits:
- You can move all workloads from Standard to Autopilot operation with minimal configuration needed.
- You can move stateless and stateful workloads and retain the relationships between workloads, as well as associated PersistentVolumes.
- Rollbacks are intuitive and managed by Google. You can roll the entire migration back or selectively roll back specific workloads.
- You can migrate stateful workloads across Google Cloud regions. Manual migration of stateful workloads can only happen in the same region.
When you use this method, GKE applies Autopilot default configurations to workloads that received a Passed with optional configuration result from the pre-flight tool. Before you migrate these workloads, ensure that you're comfortable with those defaults.
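As a rough sketch, the backup side of this flow might look like the following; the plan and backup names are hypothetical, and the exact command group and flags depend on your gcloud CLI version, so check the Backup for GKE documentation before running these:

```
# Create a backup plan for the Standard cluster.
gcloud beta container backup-restore backup-plans create MIGRATION_PLAN \
    --location=LOCATION \
    --cluster=projects/PROJECT_ID/locations/LOCATION/clusters/STANDARD_CLUSTER \
    --include-secrets \
    --include-volume-data

# Take a backup using that plan.
gcloud beta container backup-restore backups create MIGRATION_BACKUP \
    --location=LOCATION \
    --backup-plan=MIGRATION_PLAN \
    --wait-for-completion
```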
Manually migrate stateless workloads with no downtime
To migrate stateless workloads with no downtime for your services, you register the source and destination clusters to a GKE Fleet and use multi-cluster Services and multi-cluster Ingress to ensure that your workloads remain available during the migration.
- Enable multi-cluster Services and multi-cluster Ingress for your source cluster and your destination cluster. For instructions, see Configuring multi-cluster Services and Setting up Multi Cluster Ingress.
- If you have backend dependencies such as a database workload, export those Services from your Standard cluster using multi-cluster Services. This lets workloads in your Autopilot cluster access the dependencies in the Standard cluster. For instructions, see Registering a Service for export. A sketch of an export manifest follows this list.
- Deploy a multi-cluster Ingress and a multi-cluster Service to control inbound traffic between clusters. Configure the multi-cluster Service to only send traffic to the Standard cluster. For instructions, see Deploying Ingress across clusters.
- Deploy your stateless workloads with updated manifests to the Autopilot cluster. Your exported multi-cluster Services automatically match and send traffic to the corresponding stateful workloads.
- Update your multi-cluster Service to direct inbound traffic to the Autopilot cluster. For instructions, see Cluster selection.
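As a sketch of the export and cluster-selection steps above, the following manifests show a ServiceExport for a hypothetical backend Service and a MultiClusterService pinned to the Standard cluster; all names, namespaces, and the membership link are placeholders:

```yaml
# Export a backend Service from the Standard cluster. Workloads in the
# Autopilot cluster can then reach it at
# database.backend.svc.clusterset.local.
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  namespace: backend
  name: database
---
# A MultiClusterService that initially sends traffic only to the Standard
# cluster; to cut over, change the clusters list to the Autopilot cluster's
# membership.
apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
  name: frontend
  namespace: web
spec:
  template:
    spec:
      selector:
        app: frontend
      ports:
      - name: http
        port: 80
        targetPort: 8080
  clusters:
  - link: "us-central1/standard-cluster"   # placeholder membership: LOCATION/CLUSTER_NAME
```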
You're now serving your stateless workloads from the Autopilot cluster. If you only had stateless workloads in the source cluster, and no dependencies remain, proceed to Complete the migration. If you have stateful workloads, proceed to Manually migrate stateful workloads.
Manually migrate stateful workloads
After migrating your stateless workloads, you must quiesce and migrate your stateful workloads from the Standard cluster. This step requires downtime for your cluster.
Note: Your source and destination clusters must be in the same Google Cloud region to manually migrate existing persistent disks. If you need to migrate the stateful workloads to a cluster in a different region, use Backup for GKE.
- Start your environment downtime.
- Quiesce your stateful workloads.
- Ensure that you modified your workload manifests for Autopilot compatibility. For details, see Modify your workload specifications based on the pre-flight results.
- Deploy the workloads on your Autopilot cluster. Migrate persistent data by re-deploying your PersistentVolumeClaims to use your existing Compute Engine disks (a sketch follows these steps). For instructions, see Using pre-existing persistent disks as PersistentVolumes.
- Deploy the Services for your stateful workloads on the Autopilot cluster.
- Update your in-cluster networking to let your stateless workloads continue to communicate with their backend workloads:
  - If you used a static IP address in your Standard cluster backend Services, reuse that IP address in Autopilot.
  - If you let Kubernetes assign an IP address, deploy your backend Services, get the new IP address, and update your DNS to use the new IP address.
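The persistent-disk step above maps to a pattern like the following sketch, which binds a PersistentVolume to an existing Compute Engine disk; the resource names, namespace, size, and disk path are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: migrated-pv                 # hypothetical name
spec:
  storageClassName: ""              # prevent dynamic provisioning from matching
  capacity:
    storage: 100Gi                  # match the size of the existing disk
  accessModes:
  - ReadWriteOnce
  claimRef:                         # reserve this volume for the claim below
    namespace: default
    name: migrated-pvc
  csi:
    driver: pd.csi.storage.gke.io
    volumeHandle: projects/PROJECT_ID/zones/ZONE/disks/EXISTING_DISK
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: migrated-pvc
  namespace: default
spec:
  storageClassName: ""
  volumeName: migrated-pv
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```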
At this stage, the following should be true:
- You're running all your stateless workloads in Autopilot.
- Any backend stateful workloads are also running in Autopilot.
- Your stateless and stateful workloads can communicate with each other.
- Your multi-cluster Service directs all inbound traffic to your Autopilot cluster.
When you've migrated all the workloads and Kubernetes objects to the new cluster, proceed to Complete the migration.
Alternative: Manually migrate all workloads during downtime
Caution: This approach requires downtime for your cluster, for both stateful and stateless workloads. Notify your downstream users of the upcoming downtime.
If you don't want to use multi-cluster Services and multi-cluster Ingress to migrate workloads with minimal downtime, migrate all your workloads during downtime. This method results in longer downtime for your services, but doesn't require working with multi-cluster features.
- Start your downtime.
- Deploy your stateless manifests on the Autopilot cluster.
- Manually migrate your stateful workloads. For instructions, see the Manually migrate stateful workloads section.
- Modify DNS records for both intra-cluster and inbound external traffic to use the new IP addresses of Services.
- End your downtime.
Complete the migration
After moving all your workloads and Services to the new Autopilot cluster, end your downtime and allow your environment to soak for a predetermined duration. When you're satisfied with the state of your migration and are sure that you won't need to roll the migration back, you can clean up migration artifacts and complete the migration.
Optional: Clean up multi-cluster features
If you used multi-cluster Ingress and multi-cluster Services to migrate, and you don't want your Autopilot cluster to remain registered to a Fleet, do the following:
- For inbound external traffic, deploy an Ingress and set it to the IP address of the Services that expose your workloads. For instructions, see Ingress for external Application Load Balancers. A sketch follows this list.
- For intra-cluster traffic, such as from frontend workloads to stateful dependencies, update cluster DNS records to use the IP addresses of those Services.
- Delete the multi-cluster Ingress and the multi-cluster Service resources that you created during the migration.
- Disable multi-cluster Ingress and multi-cluster Services.
- Unregister the Autopilot cluster from the Fleet.
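For the first cleanup step, a single-cluster replacement for the multi-cluster Ingress might look like the following sketch; the Ingress and Service names are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress            # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "gce"   # external Application Load Balancer
spec:
  defaultBackend:
    service:
      name: frontend                # hypothetical Service exposing your workloads
      port:
        number: 80
```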
Delete the Standard cluster
When enough time has passed after the migration completes, and you're satisfied with the state of your new cluster, delete the Standard cluster. We recommend that you keep your backed-up Standard manifests.
Roll back a faulty migration
If you experience issues and want to revert to the Standard cluster, do one of the following, depending on how you performed the migration:
If you used Backup for GKE to create backups during the migration, restore the backups onto the original Standard cluster. For instructions, see Restore a backup.
If you manually migrated workloads, repeat the migration steps in the previous sections with the Standard cluster as the destination and the Autopilot cluster as the source. At a high level, this involves the following steps:
- Start downtime.
- Manually migrate stateful workloads to the Standard cluster. For instructions, see the Manually migrate stateful workloads section.
- Move stateless workloads to the Standard cluster using the original manifests that you backed up prior to the migration.
- Deploy your Ingress to the Standard cluster and cut over your DNS to the new IP addresses for Services.
- Delete the Autopilot cluster.