You are viewing legacy v1.20 Service Mesh documentation.
Migrating from Istio on GKE to Cloud Service Mesh
This guide shows how to upgrade a Google Kubernetes Engine (GKE) cluster with Istio on Google Kubernetes Engine (Istio on GKE) version 1.4 or 1.6 (Beta) to managed Cloud Service Mesh with the Google-managed control plane and Cloud Service Mesh certificate authority.
Prerequisites
The following prerequisites are required to complete this guide:
A GKE cluster with Istio on GKE enabled. If you have multiple GKE clusters, follow the same steps for all clusters.
Istio on GKE must be version 1.4 or 1.6.
Ensure that you are running GKE version 1.17.17-gke.3100+, 1.18.16-gke.1600+, 1.19.8-gke.1600+, or later.
The GKE cluster must be running in one of these locations.
The user or Service Account running this script requires the IAM permissions documented in Setting up your project.
This guide has been tested on Cloud Shell, so we recommend that you use Cloud Shell to perform the steps in this guide.
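As a quick sanity check for the version prerequisite, the following sketch defines a hypothetical `meets_min_version` helper (not part of this guide's tooling) that compares a GKE master version string against the minimums listed above; the `gcloud` lookup in the trailing comment assumes the `CLUSTER_1` variables defined later in this guide.

```shell
# Hypothetical helper: succeeds if a GKE master version string meets the
# minimums above (1.17.17-gke.3100+, 1.18.16-gke.1600+, 1.19.8-gke.1600+,
# or any 1.20+ release).
meets_min_version() {
  local v="$1" min
  case "$v" in
    1.17.*) min="1.17.17-gke.3100" ;;
    1.18.*) min="1.18.16-gke.1600" ;;
    1.19.*) min="1.19.8-gke.1600" ;;
    1.2[0-9].*) return 0 ;;   # 1.20 and later are covered by "or later"
    *) return 1 ;;            # anything older than 1.17 is unsupported
  esac
  # sort -V orders version strings; if the minimum sorts first, v >= min.
  [ "$(printf '%s\n%s\n' "$min" "$v" | sort -V | head -n1)" = "$min" ]
}

# Example usage against a live cluster (requires gcloud credentials):
# meets_min_version "$(gcloud container clusters describe ${CLUSTER_1} \
#   --zone=${CLUSTER_1_LOCATION} --format='value(currentMasterVersion)')" \
#   && echo "GKE version OK"
```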
Objectives
- Deploy the Cloud Service Mesh Google-managed control plane in the regular channel. This guide is specific to the regular channel; the stable and rapid channels require slightly modified instructions. To learn more, see the release channels documentation.
- Migrate Istio configurations to Cloud Service Mesh.
- Configure Cloud Service Mesh certificate authority.
- Migrate applications to Cloud Service Mesh.
- Upgrade `istio-ingressgateway` from Istio on GKE to Cloud Service Mesh.
- Finalize the Cloud Service Mesh migration or roll back to Istio on GKE.
Set up your environment
To set up your environment, follow these steps:
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console page, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
Note: The script does not work on macOS. The script has been tested on Cloud Shell only. The script requires the following applications (which are all included by default in Cloud Shell):
Create the environment variables used in this guide:
```shell
# Enter your project ID
export PROJECT_ID=PROJECT_ID

# Copy and paste the following
gcloud config set project ${PROJECT_ID}
export PROJECT_NUM=$(gcloud projects describe ${PROJECT_ID} --format='value(projectNumber)')
export CLUSTER_1=GKE_CLUSTER_NAME
export CLUSTER_1_LOCATION=GKE_CLUSTER_REGION_OR_ZONE
export SHELL_IP=$(curl ifconfig.me) # Required for private clusters with `master-authorized-networks` enabled
```

Create a `WORKDIR` folder. All files associated with this guide end up in `WORKDIR` so that you can delete `WORKDIR` when you are finished.

```shell
mkdir -p addon-to-asm && cd addon-to-asm && export WORKDIR=`pwd`
```

Create a `KUBECONFIG` file for this guide. You can also use your existing `KUBECONFIG` file that contains the cluster context for the GKE cluster to be migrated to Cloud Service Mesh.

```shell
touch asm-kubeconfig && export KUBECONFIG=`pwd`/asm-kubeconfig
```

Get credentials for the GKE cluster and store the context in a variable:
Zonal clusters
```shell
gcloud container clusters get-credentials ${CLUSTER_1} \
  --zone=${CLUSTER_1_LOCATION}
export CLUSTER_1_CTX=gke_${PROJECT_ID}_${CLUSTER_1_LOCATION}_${CLUSTER_1}
```

Regional clusters

```shell
gcloud container clusters get-credentials ${CLUSTER_1} \
  --region=${CLUSTER_1_LOCATION}
export CLUSTER_1_CTX=gke_${PROJECT_ID}_${CLUSTER_1_LOCATION}_${CLUSTER_1}
```

Your clusters must be registered to a fleet. This step can be done separately prior to the installation, or as part of the installation by passing the `--fleet_id` flag and one of the `--enable-all` or `--enable-registration` flags.
Your project must have the Service Mesh feature enabled. You can enable it as part of the installation by passing one of the `--enable-all` or `--enable-registration` flags, or by running the following command prior to the installation:
```shell
gcloud container hub mesh enable --project=FLEET_PROJECT_ID
```

where FLEET_PROJECT_ID is the project ID of the fleet host project.
Optional step
If the cluster is a private cluster (with `master-authorized-networks` enabled), add your `$SHELL_IP` to the `master-authorized-networks` allowlist. If you already have access to your cluster, this step might not be required.
Zonal clusters
```shell
export SHELL_IP=$(curl ifconfig.me)
gcloud container clusters update ${CLUSTER_1} \
  --zone=${CLUSTER_1_LOCATION} \
  --enable-master-authorized-networks \
  --master-authorized-networks ${SHELL_IP}/32
```

Regional clusters

```shell
export SHELL_IP=$(curl ifconfig.me)
gcloud container clusters update ${CLUSTER_1} \
  --region=${CLUSTER_1_LOCATION} \
  --enable-master-authorized-networks \
  --master-authorized-networks ${SHELL_IP}/32
```

Install Cloud Service Mesh
In this section, you deploy Cloud Service Mesh with the Google-managed control plane in the regular channel on the GKE cluster. This control plane is initially deployed alongside the existing control plane as a second (or canary) control plane.
Download the latest version of the script that installs Cloud Service Mesh to the current working directory, and make the script executable:
```shell
curl https://storage.googleapis.com/csm-artifacts/asm/asmcli > asmcli
chmod +x asmcli
```

To configure the GKE cluster, run the installation script to install Cloud Service Mesh with the Google-managed control plane in the regular channel:

```shell
./asmcli install \
  -p ${PROJECT_ID} \
  -l ${CLUSTER_1_LOCATION} \
  -n ${CLUSTER_1} \
  --fleet_id ${FLEET_PROJECT_ID} \
  --managed \
  --verbose \
  --output_dir ${CLUSTER_1} \
  --enable-all \
  --channel regular
```

This step can take a few minutes to complete.
Copy `istioctl` to the `WORKDIR` folder:

```shell
cp ./${CLUSTER_1}/istioctl ${WORKDIR}/.
```

In the next section, you download and run the `migrate_addon` script to assist in migrating to Cloud Service Mesh. The `istioctl` command-line utility needs to be in the same folder as the `migrate_addon` script. You use the `WORKDIR` folder for both the `istioctl` command-line utility and the `migrate_addon` script.
Migrate configurations to Cloud Service Mesh
In this section, you migrate Istio on GKE configurations to Cloud Service Mesh. The guided script identifies which configurations can and cannot be migrated.
Download the migration tool and make it executable:
```shell
curl https://raw.githubusercontent.com/GoogleCloudPlatform/anthos-service-mesh-packages/main/scripts/migration/migrate-addon > ${WORKDIR}/migrate_addon
chmod +x ${WORKDIR}/migrate_addon
```

Disable the Galley validation webhook. This step is required to migrate some of the 1.4 configurations to Cloud Service Mesh. Answer `Y` to both questions:

```shell
${WORKDIR}/migrate_addon -d tmpdir --command disable-galley-webhook
```

The output is similar to the following:
```
tmpdir directory not present. Create directory? Continue? [Y/n] Y
Disabling the Istio validation webhook... Continue? [Y/n] Y
Running: kubectl get clusterrole istio-galley-istio-system -n istio-system -o yaml
Running: kubectl patch clusterrole -n istio-system istio-galley-istio-system --type=json -p=[{"op": "replace", "path": "/rules/2/verbs/0", "value": "get"}]
clusterrole.rbac.authorization.k8s.io/istio-galley-istio-system patched
Running: kubectl get ValidatingWebhookConfiguration istio-galley --ignore-not-found
Running: kubectl delete ValidatingWebhookConfiguration istio-galley --ignore-not-found
validatingwebhookconfiguration.admissionregistration.k8s.io "istio-galley" deleted
```

Verify and manually migrate the configuration. This step helps identify some of the configurations that need to be manually migrated before migrating workloads to the Google-managed control plane.
```shell
${WORKDIR}/migrate_addon -d tmpdir --command config-check
```

The output is similar to the following:

```
Installing the authentication CR migration tool...
OK
Checking for configurations that will need to be explicitly migrated...
No resources found
```
Migrate custom configurations
You might need to manually migrate custom configurations before you migrate to Cloud Service Mesh. The preceding script identifies custom configurations and prints information about what is required. These customizations are as follows:
Detected custom envoy filters. These are not supported by Cloud Service Mesh; remove them if possible. Envoy filters are currently not supported in the Google-managed control plane.
Detected custom plugin certificate. Plugin certificates will not be migrated to Cloud Service Mesh. If plugin certificates are used with Istio on GKE, these certificates are not used after the workloads migrate to the Google-managed control plane. All workloads use certificates signed by the Cloud Service Mesh certificate authority. Plugin certificates are not supported by the Cloud Service Mesh certificate authority. This message is informational and no action is required.
Detected security policies that could not be migrated. <Error reason>. This usually fails because of alpha AuthZ policies that need to be manually migrated. For more context and information about how to migrate policies, see Migrate pre-Istio 1.4 Alpha security policy to the current APIs. For more information regarding the error message, see security-policy-migrate.
Detected possibly incompatible VirtualService config. <Specific deprecated config>. You need to update the following `VirtualService` configurations:

- Use of `appendHeaders` is not supported. Use `spec.http.headers` instead.
- Use of `websocketUpgrade` is not needed. It is turned on by default.
- Replace the field `abort.percent` with `abort.percentage`.
Detected custom installation of mixer resources that could not bemigrated. Requires manual migration to telemetryv2. If custom mixerpolicies are configured in addition to the defaultIstio on GKE installation, you need to manually migrate thesepolicies to telemetry v2. For more information about how to do this, seeCustomizing Istio Metrics.
Deployment <deploymentName> could be a custom gateway. Migrate this manually. You need to manually migrate all gateway Deployments other than `istio-ingressgateway` (which is installed by default). For information about how to upgrade gateways for the Google-managed control plane, see Configuring the Google-managed control plane.
To migrate configurations, follow these steps:
Manually migrate all custom configurations (except for the last configuration listed) before proceeding to step 2.
Use the migration tool to migrate the configurations that can be automatically migrated (or ignored).
```shell
${WORKDIR}/migrate_addon -d tmpdir --command migrate-configs
```

The output is similar to the following:

```
Converting authentication CRs...
2021/06/25 20:44:58 found root namespace: istio-system
2021/06/25 20:44:59 SUCCESS converting policy /default
Running: kubectl apply --dry-run=client -f beta-policy.yaml
peerauthentication.security.istio.io/default created (dry run)
Applying converted security policies in tmpdir/beta-policy.yaml... Continue? [Y/n] Y
Running: kubectl apply -f beta-policy.yaml
peerauthentication.security.istio.io/default created
OK
```
Apply the Cloud Service Mesh certificate authority root trust. This lets you migrate from the current Citadel CA to the Cloud Service Mesh certificate authority without incurring any downtime to your applications.
```shell
${WORKDIR}/migrate_addon -d tmpdir --command configure-mesh-ca
```

The output is similar to the following:

```
Configuring Istio on GKE to trust Anthos Service Mesh... Continue? [Y/n] Y
Running: kubectl get cm -n istio-system istio-asm-managed -oyaml
Running: kubectl -n istio-system apply -f -
secret/meshca-root created
Running: kubectl get cm istio -n istio-system -o yaml
Running: kubectl get cm istio -n istio-system -o yaml
Running: kubectl replace -f -
configmap/istio replaced
Running: kubectl get deploy istio-pilot -n istio-system -o yaml
Running: kubectl patch deploy istio-pilot -n istio-system -p={"spec":{"template":{"spec":{"containers":[{ "name":"discovery", "image":"gcr.io/gke-release/istio/pilot:1.4.10-gke.12", "env":[{"name":"PILOT_SKIP_VALIDATE_TRUST_DOMAIN","value":"true"}] }]}}}}
deployment.apps/istio-pilot patched
Running: kubectl get deploy istio-citadel -n istio-system -o yaml
Running: kubectl patch deploy istio-citadel -n istio-system -p={"spec":{"template":{"spec":{ "containers":[{ "name":"citadel", "args": ["--append-dns-names=true", "--grpc-port=8060", "--citadel-storage-namespace=istio-system", "--custom-dns-names=istio-pilot-service-account.istio-system:istio-pilot.istio-system", "--monitoring-port=15014", "--self-signed-ca=true", "--workload-cert-ttl=2160h", "--root-cert=/var/run/root-certs/meshca-root.pem"], "volumeMounts": [{"mountPath": "/var/run/root-certs", "name": "meshca-root", "readOnly": true}] }], "volumes": [{"name": "meshca-root", "secret":{"secretName": "meshca-root"}}] }}}}
deployment.apps/istio-citadel patched
OK
Waiting for root certificate to distribute to all pods. This will take a few minutes...
ASM root certificate not distributed to asm-system, trying again later
ASM root certificate not distributed to asm-system, trying again later
ASM root certificate distributed to namespace asm-system
ASM root certificate distributed to namespace default
ASM root certificate distributed to namespace istio-operator
ASM root certificate not distributed to istio-system, trying again later
ASM root certificate not distributed to istio-system, trying again later
ASM root certificate distributed to namespace istio-system
ASM root certificate distributed to namespace kube-node-lease
ASM root certificate distributed to namespace kube-public
ASM root certificate distributed to namespace kube-system
ASM root certificate distributed to namespace online-boutique
Waiting for proxies to pick up the new root certificate...
OK
Configuring Istio Addon 1.6 to trust Anthos Service Mesh...
Running: kubectl get cm -n istio-system env-asm-managed -ojsonpath={.data.TRUST_DOMAIN} --ignore-not-found
Running: kubectl get cm istio-istio-1611 -n istio-system -o yaml
Running: kubectl replace -f -
configmap/istio-istio-1611 replaced
Running: kubectl patch -n istio-system istiooperators.install.istio.io istio-1-6-11-gke-0 --type=merge
istiooperator.install.istio.io/istio-1-6-11-gke-0 patched
Running: kubectl -n istio-system get secret istio-ca-secret -ojsonpath={.data.ca-cert\.pem}
Running: kubectl -n istio-system patch secret istio-ca-secret
secret/istio-ca-secret patched
Running: kubectl patch deploy istiod-istio-1611 -n istio-system
deployment.apps/istiod-istio-1611 patched
Running: kubectl rollout status -w deployment/istiod-istio-1611 -n istio-system
Waiting for deployment "istiod-istio-1611" rollout to finish: 1 old replicas are pending termination...
deployment "istiod-istio-1611" successfully rolled out
Running: kubectl apply -f - -n istio-system
envoyfilter.networking.istio.io/trigger-root-cert created
Waiting for proxies to pick up the new root certificate...
Running: kubectl delete envoyfilter trigger-root-cert -n istio-system
OK
```

It takes a few minutes for the Cloud Service Mesh root certificate to be distributed to all namespaces. Wait until the script finishes with an `OK` message.
The previous step does the following:
- Installs the Cloud Service Mesh certificate authority root of trust for all workloads in the cluster.
- Changes the configurations of the control plane Deployments `istio-pilot`, `istiod`, and `istio-citadel`. The changes include the following:
  - Upgrading the images to the latest builds.
  - Disabling `trust-domain` verification by setting `PILOT_SKIP_VALIDATE_TRUST_DOMAIN=true`.
  - Adding the Cloud Service Mesh certificate authority root of trust to `istio-citadel` to distribute the `ConfigMap` to all namespaces.
  - Adding the Cloud Service Mesh certificate authority root of trust to `istio-ca-secret` to distribute the root certificate.
- Stores the older configuration manifests in `tmpdir`.
- Provides steps for the rollback function (documented later).
Migrate workloads to Cloud Service Mesh
In this section, you migrate workloads running on Istio on GKE to Cloud Service Mesh. After migration, you verify that the correct sidecar proxies (Cloud Service Mesh) are injected in every Pod and that the application is working as expected.
If you are performing this procedure on an existing cluster, select a namespace to be migrated.

Define the namespace as a variable; this namespace is migrated to Cloud Service Mesh:
```shell
export NAMESPACE=NAMESPACE_NAME
```

To migrate workloads to Cloud Service Mesh, you must relabel the namespace for Cloud Service Mesh. Labeling the namespace allows Cloud Service Mesh to automatically inject sidecars into all workloads. To label the namespace, run the following command, setting the label to `asm-managed`:

```shell
kubectl --context=${CLUSTER_1_CTX} label namespace ${NAMESPACE} istio.io/rev=asm-managed istio-injection- --overwrite
```

Perform a rolling restart of all Deployments in the namespace:

```shell
kubectl --context=${CLUSTER_1_CTX} rollout restart deployment -n ${NAMESPACE}
```

The output is similar to the following:
```
deployment.apps/deploymentName1 restarted
deployment.apps/deploymentName2 restarted
...
```
Ensure that all Pods are restarted and are running with two containers per Pod:

```shell
kubectl --context=${CLUSTER_1_CTX} -n ${NAMESPACE} get pods
```

The output is similar to the following:

```
NAME                      READY   STATUS    RESTARTS   AGE
deploymentName1-PodName   2/2     Running   0          101s
deploymentName2-PodName   2/2     Running   2          100s
...
```
A good way to verify this step is by looking at the `AGE` of the Pods. Ensure that the value is short (for example, a few minutes).

Check the sidecar Envoy proxy version from any one of the Pods from any Deployment in the namespace to confirm that you now have Cloud Service Mesh Envoy proxies deployed:

```shell
export POD_NAME=NAME_OF_ANY_POD_IN_NAMESPACE
kubectl --context=${CLUSTER_1_CTX} get pods ${POD_NAME} -n ${NAMESPACE} -o json | jq '.status.containerStatuses[].image'
```

The output is similar to the following:

```
"gcr.io/gke-release/asm/proxyv2:1.11.5-asm.3"
"appContainerImage"
```
Verify and test your applications after restarting.
```shell
kubectl --context=${CLUSTER_1_CTX} -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

(Optional) If you want Google to manage upgrades of the proxies, enable the Google-managed data plane.
View migration status
Run the following command to view the status of the migration:
```shell
kubectl get cm/asm-addon-migration-state -n istio-system -ojsonpath={.data}
```

The output indicates whether the migration is complete, pending, or failed:

```
{"migrationStatus":"SUCCESS"}
{"migrationStatus":"PENDING"}
{"migrationStatus":"MIGRATION_CONFIG_ERROR"}
{"migrationStatus":"CONTROLPLANE_PROVISION_ERROR"}
```

If `migrationStatus` outputs `SUCCESS`, the control plane has successfully upgraded to Cloud Service Mesh. To manually update the data plane, complete the steps in Migrate workloads.
If `migrationStatus` outputs any status other than `SUCCESS`, you can choose either to:

- Take no extra action if the migration error does not impact your existing Istio on GKE workloads. Otherwise, roll back if needed.
- Update the custom configurations in the cluster and rerun the migration manually if `migrationStatus` shows `MIGRATION_CONFIG_ERROR`.

You can view the control plane metrics in Metrics Explorer after a successful migration; see verify_control_plane_metrics.
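If you prefer to poll the status from a script rather than rerunning the command by hand, a sketch like the following could wrap the check; `wait_for_migration` and its polling parameters are illustrative, not part of the migration tooling.

```shell
# Hypothetical polling wrapper around the migrationStatus check.
# $1: command that prints the migrationStatus value
# $2: maximum number of attempts
# $3: seconds to sleep between attempts
wait_for_migration() {
  local cmd="$1" attempts="$2" delay="$3" status i
  for i in $(seq 1 "$attempts"); do
    status="$(eval "$cmd")"
    case "$status" in
      SUCCESS) echo "migration complete"; return 0 ;;
      PENDING|"") sleep "$delay" ;;   # not done yet; retry
      *) echo "migration ended with status: $status" >&2; return 1 ;;
    esac
  done
  echo "timed out waiting for migration" >&2
  return 1
}

# Example usage (requires cluster access):
# wait_for_migration "kubectl get cm/asm-addon-migration-state -n istio-system \
#   -ojsonpath='{.data.migrationStatus}'" 60 30
```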
Access Cloud Service Mesh dashboards
In this section, you go to the Cloud Service Mesh dashboards and make sure that you are receiving the golden signals for all Services. You should also be able to see your application topology.
In the Google Cloud console, go to the Cloud Service Mesh page.
You should be able to view the metrics and topology for your Services.
To learn more about Cloud Service Mesh dashboards, see Exploring Cloud Service Mesh in the Google Cloud console.
Note: The topology and table view in the Cloud Service Mesh dashboards might take a few minutes to represent the correct data.

Complete a successful migration
In this section, you finalize your Istio on GKE to Cloud Service Mesh migration. Before proceeding with this section, make sure that you want to proceed with Cloud Service Mesh. This section also helps you clean up your Istio on GKE artifacts. If you want to roll back to Istio on GKE, proceed to the next section.
Replace the `istio-ingressgateway` (part of standard Istio on GKE) with the Google-managed control plane versioned gateway:

```shell
${WORKDIR}/migrate_addon -d tmpdir --command replace-gateway
```

The output is similar to the following:

```
Replacing the ingress gateway with an Anthos Service Mesh gateway... Continue? [Y/n] Y
Running: kubectl label namespace istio-system istio-injection- istio.io/rev- --overwrite
label "istio.io/rev" not found.
namespace/istio-system labeled
Running: kubectl apply -f -
serviceaccount/asm-ingressgateway created
deployment.apps/asm-ingressgateway created
role.rbac.authorization.k8s.io/asm-ingressgateway created
rolebinding.rbac.authorization.k8s.io/asm-ingressgateway created
Running: kubectl wait --for=condition=available --timeout=600s deployment/asm-ingressgateway -n istio-system
deployment.apps/asm-ingressgateway condition met
Scaling the Istio ingress gateway to zero replicas... Continue? [Y/n] Y
Running: kubectl -n istio-system patch hpa istio-ingressgateway --patch {"spec":{"minReplicas":1}}
horizontalpodautoscaler.autoscaling/istio-ingressgateway patched (no change)
Running: kubectl -n istio-system scale deployment istio-ingressgateway --replicas=0
deployment.apps/istio-ingressgateway scaled
OK
```

Reconfigure the webhook to use the Google-managed control plane; all workloads start by using the Google-managed control plane:
```shell
${WORKDIR}/migrate_addon -d tmpdir --command replace-webhook
```

The output is similar to the following:

```
Configuring sidecar injection to use Anthos Service Mesh by default... Continue? [Y/n] Y
Running: kubectl patch mutatingwebhookconfigurations istio-sidecar-injector --type=json -p=[{"op": "replace", "path": "/webhooks"}]
mutatingwebhookconfiguration.admissionregistration.k8s.io/istio-sidecar-injector patched
Revision tag "default" created, referencing control plane revision "asm-managed". To enable injection using this
revision tag, use 'kubectl label namespace <NAMESPACE> istio.io/rev=default'
OK
```

Relabel all the namespaces with the Cloud Service Mesh label, and perform a rolling restart of all workloads to get them on the Google-managed control plane:
```shell
export NAMESPACE=NAMESPACE_NAME
kubectl --context=${CLUSTER_1_CTX} label namespace ${NAMESPACE} istio.io/rev=asm-managed istio-injection- --overwrite
kubectl --context=${CLUSTER_1_CTX} rollout restart deployment -n ${NAMESPACE}
```

You can ignore the message `"istio-injection not found"` in the output. That means that the namespace didn't previously have the `istio-injection` label, which you should expect in new installations of Cloud Service Mesh or new deployments. Because auto-injection fails if a namespace has both the `istio-injection` label and the revision label, all `kubectl label` commands in the Istio on GKE documentation include removing the `istio-injection` label.

Finalize the migration by running the following command:
```shell
${WORKDIR}/migrate_addon -d tmpdir --command write-marker
```

The output is similar to the following:

```
Current migration state: SUCCESS
Running: kubectl apply -f -
configmap/asm-addon-migration-state created
OK
```
Disable Istio on GKE by running the following command:
Zonal clusters
```shell
gcloud beta container clusters update ${CLUSTER_1} \
  --project=$PROJECT_ID \
  --zone=${CLUSTER_1_LOCATION} \
  --update-addons=Istio=DISABLED
```

Regional clusters

```shell
gcloud beta container clusters update ${CLUSTER_1} \
  --project=$PROJECT_ID \
  --region=${CLUSTER_1_LOCATION} \
  --update-addons=Istio=DISABLED
```

Clean up configurations by running the following command:
```shell
${WORKDIR}/migrate_addon -d tmpdir --command cleanup
```

The output is similar to the following:

```
Cleaning up old resources...
Running: kubectl get cm -n istio-system asm-addon-migration-state -ojsonpath={.data.migrationStatus}
Will delete IstioOperator/istio-1-6-11-gke-0.istio-system
Will delete ServiceAccount/istio-citadel-service-account.istio-system
...
Will delete DestinationRule/istio-policy.istio-system
Will delete DestinationRule/istio-telemetry.istio-system
Will delete Secret/istio-ca-secret.istio-system
Deleting resources previously listed... Continue? [Y/n] Y
Running: kubectl delete IstioOperator istio-1-6-11-gke-0 -n istio-system --ignore-not-found
istiooperator.install.istio.io "istio-1-6-11-gke-0" deleted
Running: kubectl delete ServiceAccount istio-citadel-service-account -n istio-system --ignore-not-found
serviceaccount "istio-citadel-service-account" deleted
...
Running: kubectl delete Secret istio-ca-secret -n istio-system --ignore-not-found
secret "istio-ca-secret" deleted
Running: kubectl delete -n istio-system jobs -lk8s-app=istio,app=security
job.batch "istio-security-post-install-1.4.10-gke.8" deleted
```

Ensure that Istio on GKE Deployments and Services have been successfully removed from the cluster:
```shell
kubectl --context=${CLUSTER_1_CTX} -n istio-system get deployments,services
```

The output is similar to the following:

```
NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/asm-ingressgateway   1/1     1            1           10m

NAME                           TYPE           CLUSTER-IP    EXTERNAL-IP      AGE   PORT(S)
service/istio-ingressgateway   LoadBalancer   10.64.5.208   34.139.100.237   95m   15020:31959/TCP,80:30971/TCP,443:31688/TCP,31400:31664/TCP,15029:32493/TCP,15030:31722/TCP,15031:30198/TCP,15032:31910/TCP,15443:31222/TCP
```
You only see the Cloud Service Mesh ingress gateway Service and Deployment.
Congratulations. You have successfully migrated from Istio on GKE to Cloud Service Mesh with the Google-managed control plane and Cloud Service Mesh certificate authority without any downtime to your applications.
Roll back changes
In this section, if you do not want to proceed with Cloud Service Mesh, you can roll back your Cloud Service Mesh changes. After completing this section, your workloads move back to Istio on GKE.
Roll back the mutating webhook changes:

```shell
${WORKDIR}/migrate_addon -d tmpdir --command rollback-mutatingwebhook
```

Relabel the namespaces to use Istio on GKE sidecar injection instead of Cloud Service Mesh by running the following command:
For namespaces with version 1.4 workloads:

```shell
export NAMESPACE=NAMESPACE_NAME
kubectl --context=${CLUSTER_1_CTX} label namespace ${NAMESPACE} istio.io/rev- istio-injection=enabled --overwrite
```

For namespaces with version 1.6 workloads:

```shell
export NAMESPACE=NAMESPACE_NAME
kubectl --context=${CLUSTER_1_CTX} label namespace ${NAMESPACE} istio.io/rev=istio-1611 --overwrite
```

Perform a rolling restart of all Deployments in the namespace:
```shell
kubectl --context=${CLUSTER_1_CTX} rollout restart deployment -n ${NAMESPACE}
```

Wait a few minutes and ensure that all Pods are running:

```shell
kubectl --context=${CLUSTER_1_CTX} -n ${NAMESPACE} get pods
```

The output is similar to the following:

```
NAME                      READY   STATUS    RESTARTS   AGE
deploymentName1-PodName   2/2     Running   0          101s
deploymentName2-PodName   2/2     Running   2          100s
...
```
Verify the sidecar Envoy proxy version from any one of the Pods to confirm that you have Istio on GKE v1.4 or v1.6 Envoy proxies deployed:
```shell
export POD_NAME=NAME_OF_ANY_POD_IN_NAMESPACE
kubectl --context=${CLUSTER_1_CTX} get pods ${POD_NAME} -n ${NAMESPACE} -o json | jq '.status.containerStatuses[].image'
```

The output is similar to the following:

```
"gke.gcr.io/istio/proxyv2:1.4.10-gke.8"
"appContainerImage"
```

or

```
"gke.gcr.io/istio/proxyv2:1.6.14-gke.4"
"appContainerImage"
```
Verify and test your applications after restarting.
Roll back the Cloud Service Mesh certificate authority changes:

```shell
${WORKDIR}/migrate_addon -d tmpdir --command rollback-mesh-ca
```

Re-enable the Istio Galley webhook:

```shell
${WORKDIR}/migrate_addon -d tmpdir --command enable-galley-webhook
```
You have successfully rolled back your changes to Istio on GKE.
Deploy Online Boutique
In this section, you deploy a sample microservices-based application called Online Boutique to the GKE cluster. Online Boutique is deployed in an Istio-enabled namespace. You verify that the application is working and that Istio on GKE is injecting the sidecar proxies into every Pod.
If you already have existing clusters with applications, you can skip creating a new namespace and deploying Online Boutique. You can follow the same process for all namespaces in the Migrate workloads to Cloud Service Mesh section.
Deploy Online Boutique to the GKE cluster:
```shell
kpt pkg get \
  https://github.com/GoogleCloudPlatform/microservices-demo.git/release \
  online-boutique
kubectl --context=${CLUSTER_1_CTX} create namespace online-boutique
kubectl --context=${CLUSTER_1_CTX} label namespace online-boutique istio-injection=enabled
kubectl --context=${CLUSTER_1_CTX} -n online-boutique apply -f online-boutique
```

Wait until all Deployments are ready:

```shell
kubectl --context=${CLUSTER_1_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment adservice
kubectl --context=${CLUSTER_1_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment checkoutservice
kubectl --context=${CLUSTER_1_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment currencyservice
kubectl --context=${CLUSTER_1_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment emailservice
kubectl --context=${CLUSTER_1_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment frontend
kubectl --context=${CLUSTER_1_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment paymentservice
kubectl --context=${CLUSTER_1_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment productcatalogservice
kubectl --context=${CLUSTER_1_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment shippingservice
kubectl --context=${CLUSTER_1_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment cartservice
kubectl --context=${CLUSTER_1_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment loadgenerator
kubectl --context=${CLUSTER_1_CTX} -n online-boutique wait --for=condition=available --timeout=5m deployment recommendationservice
```

Ensure that there are two containers per Pod: the application container and the Istio sidecar proxy that Istio on GKE automatically injects into the Pod:

```shell
kubectl --context=${CLUSTER_1_CTX} -n online-boutique get pods
```

The output is similar to the following:
```
NAME                                     READY   STATUS    RESTARTS   AGE
adservice-7cbc9bd9-t92k4                 2/2     Running   0          3m21s
cartservice-d7db78c66-5qfmt              2/2     Running   1          3m23s
checkoutservice-784bfc794f-j8rl5         2/2     Running   0          3m26s
currencyservice-5898885559-lkwg4         2/2     Running   0          3m23s
emailservice-6bd8b47657-llvgv            2/2     Running   0          3m27s
frontend-764c5c755f-9wf97                2/2     Running   0          3m25s
loadgenerator-84cbcd768c-5pdbr           2/2     Running   3          3m23s
paymentservice-6c676df669-s779c          2/2     Running   0          3m25s
productcatalogservice-7fcf4f8cc-hvf5x    2/2     Running   0          3m24s
recommendationservice-79f5f4bbf5-6st24   2/2     Running   0          3m26s
redis-cart-74594bd569-pfhkz              2/2     Running   0          3m22s
shippingservice-b5879cdbf-5z7m5          2/2     Running   0          3m22s
```
You can also check the sidecar Envoy proxy version from any one of the Pods to confirm that you have Istio on GKE v1.4 Envoy proxies deployed:

```shell
export FRONTEND_POD=$(kubectl get pod -n online-boutique -l app=frontend --context=${CLUSTER_1_CTX} -o jsonpath='{.items[0].metadata.name}')
kubectl --context=${CLUSTER_1_CTX} get pods ${FRONTEND_POD} -n online-boutique -o json | jq '.status.containerStatuses[].image'
```

The output is similar to the following:

```
"gke.gcr.io/istio/proxyv2:1.4.10-gke.8"
"gcr.io/google-samples/microservices-demo/frontend:v0.3.4"
```
Access the application by navigating to the IP address of the `istio-ingressgateway` Service:

```shell
kubectl --context=${CLUSTER_1_CTX} -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```
Frequently asked questions
This section describes frequently asked questions and related answers about migrating from Istio on GKE to Cloud Service Mesh.
Why am I being migrated from Istio on GKE to Cloud Service Mesh?
Istio on Google Kubernetes Engine is a beta feature that deploys Google-managed Istio on a Google Kubernetes Engine (GKE) cluster. Istio on GKE deploys an unsupported version (Istio version 1.4). To provide you with the latest service mesh features and a supported service mesh implementation, we are upgrading all Istio on GKE users to Cloud Service Mesh.
Cloud Service Mesh is Google's managed and supported service mesh product powered by Istio APIs. Cloud Service Mesh is to Istio what GKE is to Kubernetes. Because Cloud Service Mesh is based on Istio APIs, you can continue to use your Istio configurations when you migrate to Cloud Service Mesh. In addition, there is no proprietary vendor lock-in.
Cloud Service Mesh provides the following benefits:
- A Google-managed and Google-supported service mesh.
- Istio APIs with no vendor lock-in.
- Out-of-the-box telemetry dashboards and SLO management without a requirement to manage additional third-party solutions.
- Google-hosted certificate authority options.
- Integration with Google Cloud networking and Identity-Aware Proxy (IAP).
- Hybrid and multi-cloud platform support.
To learn more about Cloud Service Mesh features and capabilities, see Google-managed control plane supported features.
Is there any downtime associated with this migration?
The migration script is designed to avoid downtime. The script installs Cloud Service Mesh as a canary control plane alongside your existing Istio control plane. The istio-ingressgateway is upgraded in place. You then relabel the Istio-enabled namespaces to start using Cloud Service Mesh with Cloud Service Mesh certificate authority.
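As a rough sketch of the relabeling step, switching a namespace from the Istio on GKE injector to a Cloud Service Mesh control-plane revision looks like the following. The namespace `online-boutique` and the revision label value `asm-managed` are assumptions for illustration; use the revision reported by your installation:

```shell
# Remove the old injection label and point the namespace at the
# Cloud Service Mesh revision (revision value is an assumption).
kubectl --context=${CLUSTER_1_CTX} label namespace online-boutique \
    istio-injection- istio.io/rev=asm-managed --overwrite

# Restart workloads so new Pods are injected with the new sidecar.
kubectl --context=${CLUSTER_1_CTX} -n online-boutique rollout restart deployment
```

Because Pods are replaced gradually by the rolling restart, a properly configured PodDisruptionBudget keeps the application serving throughout.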
Ensure that you have PodDisruptionBudgets properly configured for your applications so that you do not experience any application downtime. Even though you can avoid downtime, if you are performing this migration yourself, we recommend that you perform this migration during a scheduled maintenance window. Google-driven migrations are performed during a GKE maintenance window. Ensure that your GKE clusters have maintenance windows configured.
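For illustration, a minimal PodDisruptionBudget for a hypothetical frontend Deployment might be applied like this. The namespace, labels, and minAvailable value are assumptions; adjust them for your workloads:

```shell
# Illustrative PodDisruptionBudget: keep at least one frontend replica
# available during voluntary disruptions such as sidecar rollouts.
# (Use apiVersion policy/v1 on Kubernetes 1.21 and later.)
kubectl --context=${CLUSTER_1_CTX} apply -f - <<EOF
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: frontend-pdb
  namespace: online-boutique
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: frontend
EOF
```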
Is there any cost associated with using Cloud Service Mesh?
There are two ways to use Cloud Service Mesh on GKE:
If you are a GKE Enterprise subscriber, Cloud Service Mesh is included as part of your GKE Enterprise subscription.
If you are not a GKE Enterprise subscriber, you can use Cloud Service Mesh as a standalone product on GKE (on Google Cloud). For more information, see Cloud Service Mesh pricing details.
Are there any features or configurations that are not supported in the latest version of Cloud Service Mesh?
The script checks all Istio configurations and migrates them to the latest Cloud Service Mesh version. There are certain configurations that might require additional steps to be migrated from Istio version 1.4 to Cloud Service Mesh version 1.10. The script performs a configuration check and informs you if any configurations require additional steps.
Does migrating change my current Istio configurations?
No, your Istio configurations work on Cloud Service Mesh without requiring anychanges.
After I migrate to Cloud Service Mesh, can I migrate back to Istio?
Yes, there is no commitment to use Cloud Service Mesh. You canuninstall Cloud Service Mesh and reinstall Istio at any time.
If the migration fails, is it possible to roll back?
Yes, the script lets you roll back to your previous Istio on GKEversion.
Which version of Istio can I migrate by using this script?
The script assists you in migrating from Istio on GKE version 1.4 to Cloud Service Mesh version 1.10. The script validates your Istio version during the pre-migration stage, and informs you whether your Istio version can be migrated.
How can I get additional help with this migration?
Our Support team is glad to help. You can open a support case from the Google Cloud console. To learn more, see Managing support cases.
What happens if I don't migrate to Cloud Service Mesh?
Your Istio components continue to work, but Google no longer manages your Istioinstallation. You no longer receive automatic updates, and the installation isnot guaranteed to work as the Kubernetes cluster version updates.
For more information, see Istio support.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2026-02-19 UTC.