Configuration updates for modernization

This document describes configuration updates you may need to make to your managed Cloud Service Mesh before modernizing your mesh to the TRAFFIC_DIRECTOR control plane from the ISTIOD control plane.

The following sections describe the configuration updates that may be necessary to prepare your cluster for modernization. See each section for update instructions.

For more information on the modernization workflow, see the Managed control plane modernization page.

Migrate from Istio secrets to multicluster_mode

Multi-cluster secrets are not supported when a cluster is using the TRAFFIC_DIRECTOR control plane. This section describes how you can migrate from using Istio multi-cluster secrets to using multicluster_mode.

Istio secrets versus declarative API overview

Open source Istio multi-cluster endpoint discovery works by using istioctl or other tools to create a Kubernetes Secret in a cluster. This secret allows a cluster to load balance traffic to another cluster in the mesh. The ISTIOD control plane then reads this secret and begins routing traffic to that other cluster.

Cloud Service Mesh has a declarative API to control multi-cluster traffic instead of directly creating Istio secrets. This API treats Istio secrets as an implementation detail and is more reliable than creating Istio secrets manually. Future Cloud Service Mesh features will depend on the declarative API, and you won't be able to use those new features with Istio secrets directly. The declarative API is the only supported path forward.

If you are using Istio secrets, migrate to the declarative API as soon as possible. Note that the multicluster_mode setting directs each cluster to send traffic to every other cluster in the mesh. Using secrets allows a more flexible configuration, because you can configure, for each cluster, which other clusters it directs traffic to. For a full list of the differences between the supported features of the declarative API and Istio secrets, see Supported features using Istio APIs.

Important: With the declarative API, an entire cluster is opted into endpoint discovery at a time. This means that every cluster with multicluster_mode=connected will discover endpoints for every other cluster in the fleet that also has multicluster_mode=connected.
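If you want to check how a cluster is currently configured before migrating, you can read the multicluster_mode value directly from the asm-options ConfigMap that the steps below modify. The following is a minimal sketch using standard kubectl JSONPath output; an empty result means the value has not been set yet:

kubectl get configmap asm-options -n istio-system \
    -o jsonpath='{.data.multicluster_mode}'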

Migrate from Istio secrets to declarative API

If you provisioned Cloud Service Mesh using automatic management with the fleet feature API, you don't need to follow these instructions. These steps only apply if you onboarded using asmcli --managed.

Note that this process changes the secrets that point to a cluster. During this process, the endpoints are removed and then re-added. Between the endpoints being removed and re-added, traffic briefly reverts to routing locally instead of load balancing to other clusters. For more information, see the GitHub issue.

To move from using Istio secrets to the declarative API, follow these steps. Execute these steps at the same time or in close succession:

  1. Enable the declarative API for each cluster in the fleet where you want to enable multi-cluster endpoint discovery by setting multicluster_mode=connected. Note that you need to explicitly set multicluster_mode=disconnected if you don't want the cluster to be discoverable.

    Use the following command to opt a cluster in to multi-cluster endpoint discovery:

     kubectl patch configmap/asm-options -n istio-system --type merge -p '{"data":{"multicluster_mode":"connected"}}'

    Use the following command to opt a cluster out of endpoint discovery:

     kubectl patch configmap/asm-options -n istio-system --type merge -p '{"data":{"multicluster_mode":"disconnected"}}'
  2. Delete old secrets.

    After setting multicluster_mode=connected on your clusters, each cluster will have a new secret generated for every other cluster that also has multicluster_mode=connected set. Each secret is placed in the istio-system namespace and has the following format:

    istio-remote-secret-projects-PROJECT_NAME-locations-LOCATION-memberships-MEMBERSHIPS

    Each secret will also have the label istio.io/owned-by: mesh.googleapis.com applied.

    Once the new secrets are created, you can delete any secrets manually created with istioctl create-remote-secret:

     kubectl delete secret SECRET_NAME -n istio-system
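    To double-check which secrets are safe to delete, you can compare the secrets that Cloud Service Mesh manages (those carrying the istio.io/owned-by: mesh.googleapis.com label) against all remote secrets in the cluster. This is a sketch; the grep pattern assumes your manually created secrets use the default istio-remote-secret- prefix that istioctl create-remote-secret applies:

     # Remote secrets managed by Cloud Service Mesh (keep these):
     kubectl get secrets -n istio-system -l istio.io/owned-by=mesh.googleapis.com

     # All remote secrets; entries missing from the list above were created manually:
     kubectl get secrets -n istio-system | grep istio-remote-secret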

Once migrated, check your request metrics to make sure they're routed as expected.
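One way to confirm that cross-cluster load balancing still works after the migration is to inspect the endpoints programmed into a sidecar with istioctl. The following is a sketch; POD_NAME, NAMESPACE, and SERVICE_HOSTNAME are placeholders for one of your injected workload pods and the remote service you expect it to reach:

istioctl proxy-config endpoints POD_NAME -n NAMESPACE | grep SERVICE_HOSTNAME

Endpoints from other clusters should still appear in the output once the managed secrets are in place.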

Enable Workload Identity Federation for GKE

Workload Identity Federation is the recommended secure method for Google Kubernetes Engine workloads to access Google Cloud services such as Compute Engine, BigQuery, and Machine Learning APIs. Because it uses IAM policies, Workload Identity Federation doesn't require manual configuration or less secure methods like service account key files. For more details on Workload Identity Federation, see How Workload Identity Federation for GKE works.

The following sections describe how to enable Workload Identity Federation.

Enable Workload Identity Federation on clusters

  1. Check that Workload Identity Federation is enabled for your cluster. To do that, ensure the GKE cluster has a Workload Identity Federation pool configured, which is essential for IAM credential validation.

    Use the following command to check the workload identity pool set for a cluster:

     gcloud container clusters describe CLUSTER_NAME \
         --format="value(workloadIdentityConfig.workloadPool)"

    Replace CLUSTER_NAME with the name of your GKE cluster. If you haven't already specified a default zone or region for gcloud, you might also need to specify a --region or --zone flag when running this command.

  2. If the output is empty, follow the instructions in Update an existing cluster to enable workload identity on existing GKE clusters.
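    For reference, enabling Workload Identity Federation on an existing Standard cluster is a single cluster update. The following is a sketch of the command described in that guide; confirm the details there before running it:

     gcloud container clusters update CLUSTER_NAME \
         --workload-pool=PROJECT_ID.svc.id.goog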

Enable Workload Identity Federation on node pools

After Workload Identity Federation is enabled on a cluster, node pools must be configured to use the GKE metadata server.

  1. List all the node pools of a Standard cluster. Run the gcloud container node-pools list command:

     gcloud container node-pools list --cluster CLUSTER_NAME

    Replace CLUSTER_NAME with the name of your GKE cluster. If you haven't already specified a default zone or region for gcloud, you might also need to specify a --region or --zone flag when running this command.

  2. Verify that each node pool is using the GKE metadata server:

     gcloud container node-pools describe NODEPOOL_NAME \
         --cluster=CLUSTER_NAME \
         --format="value(config.workloadMetadataConfig.mode)"

    Replace the following:

    • NODEPOOL_NAME with the name of your node pool.
    • CLUSTER_NAME with the name of your GKE cluster.
  3. If the output doesn't contain GKE_METADATA, update the node pool using the Update an existing node pool guide.
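    Updating a node pool to use the GKE metadata server is typically a single command. The following is a sketch; the same --region or --zone caveat from the earlier steps applies:

     gcloud container node-pools update NODEPOOL_NAME \
         --cluster=CLUSTER_NAME \
         --workload-metadata=GKE_METADATA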

Enable managed container network interface (CNI)

This section guides you through enabling managed CNI for Cloud Service Mesh on Google Kubernetes Engine.

Managed CNI overview

Managed container network interface (CNI) is a Google-managed implementation of the Istio CNI. The CNI plugin streamlines pod networking by configuring iptables rules, which enables traffic redirection between applications and Envoy proxies and eliminates the need for the privileged permissions that the init container otherwise requires to manage iptables.

The Istio CNI plugin replaces the istio-init container. The istio-init container was previously responsible for setting up the pod's network environment to enable traffic interception for the Istio sidecar. The CNI plugin performs the same network redirection function, but with the added benefit of reducing the need for elevated privileges, thereby enhancing security.

Therefore, for enhanced security and reliability, and to simplify management and troubleshooting, managed CNI is required across all managed Cloud Service Mesh deployments.

Impact on init containers

Init containers are specialized containers that run before application containers to perform setup tasks, such as downloading configuration files, communicating with external services, or performing pre-application initialization. Init containers that rely on network access might encounter issues when managed CNI is enabled in the cluster.

The pod setup process with managed CNI is as follows:

  1. The CNI plugin sets up pod network interfaces, assigns pod IPs, and redirects traffic to the Istio sidecar proxy, which hasn't started yet.
  2. All init containers execute and complete.
  3. The Istio sidecar proxy starts alongside the application containers.

Therefore, if an init container attempts to make outbound network connections or connect to services within the mesh, its requests may be dropped or misrouted. This is because the Istio sidecar proxy, which manages network traffic for the pod, is not running when the requests are made. For more details, see the Istio CNI documentation.
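To get a quick inventory of the workloads that declare init containers, and therefore deserve review before you enable managed CNI, you can list them with a custom-columns query. This is a sketch; pods without init containers show <none> in the INIT column:

kubectl get pods --all-namespaces \
    -o custom-columns='NAMESPACE:.metadata.namespace,POD:.metadata.name,INIT:.spec.initContainers[*].name'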

Enable managed CNI for your cluster

Follow the steps in this section to enable managed CNI on your cluster.

  1. Remove network dependencies from your init containers. Consider the following alternatives:

    • Modify application logic or containers: You can modify your services to remove the dependency on init containers that require network requests, or perform those network operations within your application containers after the sidecar proxy has started.
    • Use Kubernetes ConfigMaps or secrets: Store configuration data fetched by the network request in Kubernetes ConfigMaps or secrets and mount them into your application containers. For alternative solutions, refer to the Istio documentation.
  2. Enable managed CNI on your cluster:

    1. Make the following configuration changes:

      1. Run the following command to locate the controlPlaneRevision:

        kubectl get controlplanerevision -n istio-system
      2. In your ControlPlaneRevision (CPR) custom resource (CR), set the label mesh.cloud.google.com/managed-cni-enabled to true.

        kubectl label controlplanerevision CPR_NAME \
            -n istio-system mesh.cloud.google.com/managed-cni-enabled=true \
            --overwrite

        Replace CPR_NAME with the value under the NAME column from the output of the previous step.

      3. In the asm-options ConfigMap, set the ASM_OPTS value to CNI=on.

        kubectl patch configmap asm-options -n istio-system \
            -p '{"data":{"ASM_OPTS":"CNI=on"}}'
      4. In your ControlPlaneRevision (CPR) custom resource (CR), set the label mesh.cloud.google.com/force-reprovision to true. This action triggers a control plane restart.

        Note: This method is not the recommended method for restarting the control plane, and should only be used for Cloud Service Mesh modernization efforts.

        kubectl label controlplanerevision CPR_NAME \
            -n istio-system mesh.cloud.google.com/force-reprovision=true \
            --overwrite
    2. Check the feature state. Retrieve the feature state using the following command:

      gcloud container fleet mesh describe --project FLEET_PROJECT_ID

      Replace FLEET_PROJECT_ID with the ID of your Fleet Host project. Generally, the FLEET_PROJECT_ID has the same name as the project.

      • Verify that the MANAGED_CNI_NOT_ENABLED condition is removed from servicemesh.conditions.
      • Note that it may take up to 15-20 minutes for the state to update. Try waiting a few minutes and re-running the command.
    3. Once the controlPlaneManagement.state is ACTIVE in the cluster's feature state, restart the pods.
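      Restarting the injected workloads is typically done with a rolling restart so that pods are re-created with CNI-based traffic redirection. A minimal sketch, assuming your workloads are Deployments in a namespace named NAMESPACE:

      kubectl rollout restart deployment -n NAMESPACE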

Move away from non-standard binary usage in sidecars

This section suggests ways to make your deployments compatible with the distroless Envoy proxy image.

Distroless Envoy proxy sidecar images

Cloud Service Mesh uses one of two types of Envoy proxy sidecar images based on your control plane configuration: an Ubuntu-based image containing various binaries, and a distroless image. Distroless base images are minimal container images that prioritize security and resource optimization by including only essential components. The attack surface is reduced to help prevent vulnerabilities. For more information, refer to the documentation on Distroless proxy image.

Binary compatibility

As a best practice, you should restrict the contents of a container runtime to only the necessary packages. This approach improves security and the signal-to-noise ratio of Common Vulnerabilities and Exposures (CVE) scanners. The distroless sidecar image has a minimal set of dependencies, stripped of all non-essential executables, libraries, and debugging tools. It is therefore not possible to execute a shell or use debug utilities such as curl or ping inside the container, for example with kubectl exec.

Make clusters compatible with distroless images

  • Remove references to any unsupported binaries (like bash or curl) from your configuration, particularly inside Readiness, Startup, and Liveness probes, and Lifecycle PostStart and PreStop hooks within the istio-proxy, istio-init, or istio-validation containers.
  • Consider alternatives like holdApplicationUntilProxyStarts for certain use cases.
  • For debugging, you can use ephemeral containers to attach to a running workload Pod. You can then inspect it and run custom commands. For an example, see Collecting Cloud Service Mesh logs.
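To find probe or hook definitions that might invoke binaries missing from the distroless image, one rough approach is to grep the rendered spec of a workload. This is a sketch; DEPLOYMENT_NAME and NAMESPACE are placeholders for your own workload:

kubectl get deployment DEPLOYMENT_NAME -n NAMESPACE -o yaml \
    | grep -n -A 5 -E 'livenessProbe|readinessProbe|startupProbe|lifecycle'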

If you can't find a solution for your specific use case, contact Google Cloud Support; see Getting support.

Migrate to the Istio Ingress Gateway

This section shows you how to migrate to the Istio Ingress Gateway. There are two methods for migrating to the Istio Ingress Gateway:

  1. Phased Migration with Traffic Splitting

    This method prioritizes minimizing disruption: you incrementally send traffic to the new Istio gateway, which lets you monitor its performance on a small percentage of requests and quickly revert if necessary. Keep in mind that configuring Layer 7 traffic splitting can be challenging for some applications, and you need to manage both gateway systems concurrently during the transition. See Phased Migration with traffic splitting for the steps.

  2. Direct Migration

    This method involves rerouting all traffic to the new Istio gateway at once, after you have thoroughly tested it. The advantage of this approach is complete separation from the old gateway's infrastructure, allowing flexible configuration of the new gateway without the constraints of the existing setup. However, there is an increased risk of downtime if unexpected problems arise with the new gateway during the transition. See Direct Migration for the steps.

The following migration examples assume you have an HTTP service (httpbin) running in the application namespace (default) and exposed externally using the Kubernetes Gateway API. The relevant configurations are:

  • Gateway: k8-api-gateway (in the istio-ingress namespace) - configured to listen for HTTP traffic on port 80 for any hostname ending with .example.com.
  • HTTPRoute: httpbin-route (in the default namespace) - directs any HTTP request with the hostname httpbin.example.com and a path starting with /get to the httpbin service within the default namespace.
  • The httpbin application is accessible using the external IP 34.57.246.68.

Basic gateway diagram

Phased Migration with traffic splitting

Provision a new Istio Ingress Gateway

  1. Deploy a new Ingress Gateway following the steps in the Deploy sample gateway section and customize the sample configurations to your requirements. The samples in the anthos-service-mesh repository deploy an istio-ingressgateway LoadBalancer service and the corresponding ingress-gateway pods.

    Example Gateway Resource (istio-ingressgateway.yaml)

    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: istio-api-gateway
      namespace: GATEWAY_NAMESPACE
    spec:
      selector:
        istio: ingressgateway # The selector should match the ingress-gateway pod labels.
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts: # or specific hostnames if needed
        - "httpbin.example.com"
  2. Apply the Gateway configuration to manage traffic:

    kubectl apply -f istio-ingressgateway.yaml -n GATEWAY_NAMESPACE

    Ensure the spec.selector in your Gateway resource matches the labels of your ingress-gateway pods. For example, if the ingress-gateway pods have the label istio=ingressgateway, your Gateway configuration must also select the istio=ingressgateway label.
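    You can confirm the pod labels before relying on the selector. A minimal check, assuming the gateway pods carry the istio=ingressgateway label:

    kubectl get pods -n GATEWAY_NAMESPACE -l istio=ingressgateway --show-labels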

Configure initial routing for the new Gateway

  1. Define the initial routing rules for your application using an Istio VirtualService.

    Example VirtualService (my-app-vs-new.yaml):

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: httpbin-vs
      namespace: APPLICATION_NAMESPACE
    spec:
      gateways:
      - istio-ingress/istio-api-gateway # Replace with <gateway-namespace>/<gateway-name>
      hosts:
      - httpbin.example.com
      http:
      - match:
        - uri:
            prefix: /get
        route:
        - destination:
            host: httpbin
            port:
              number: 8000
  2. Apply the VirtualService:

    kubectl apply -f my-app-vs-new.yaml -n MY_APP_NAMESPACE

Access the backend (httpbin) service through the newly deployed Istio Ingress Gateway

  1. Set the INGRESS_HOST environment variable to the external IP address associated with the recently deployed istio-ingressgateway load balancer:

    export INGRESS_HOST=$(kubectl -n GATEWAY_NAMESPACE get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  2. Verify the application (httpbin) is accessible using the new gateway:

    curl -s -I -H Host:httpbin.example.com "http://$INGRESS_HOST/get"

    The output is similar to:

    HTTP/1.1 200 OK

Request flow with the new istio ingress gateway

Modify existing Ingress for traffic splitting

After confirming the successful setup of the new gateway (for example, istio-api-gateway), you can begin routing a portion of your traffic through it. To do this, update your current HTTPRoute to direct a small percentage of traffic to the new gateway, while the larger portion continues to use the existing gateway (k8-api-gateway).

  1. Open the httproute for editing:

    kubectl edit httproute httpbin-route -n MY_APP_NAMESPACE
  2. Add a new backend reference pointing to the new Ingress Gateway's load balancer service with an initial weight of 10% and update the weight for the old gateway's backend.

    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: httpbin-route
      namespace: MY_APP_NAMESPACE # your application's namespace
    spec:
      parentRefs:
      - name: k8-api-gateway
        namespace: istio-ingress
      hostnames: ["httpbin.example.com"]
      rules:
      - matches:
        - path:
            type: PathPrefix
            value: /get
        backendRefs:
        - name: httpbin
          port: 8000
          weight: 90
        - name: istio-ingressgateway # Newly deployed load balancer service
          namespace: GATEWAY_NAMESPACE
          port: 80
          weight: 10
  3. Grant permission for cross-namespace referencing with a reference grant.

    To allow your HTTPRoute in the application namespace (default) to access the load balancer service in the gateway namespace (istio-ingress), you may need to create a reference grant. This resource serves as a security control, explicitly defining which cross-namespace references are permitted.

    The following istio-ingress-grant.yaml describes an example reference grant:

    apiVersion: gateway.networking.k8s.io/v1beta1
    kind: ReferenceGrant
    metadata:
      name: istio-ingressgateway-grant
      namespace: istio-ingress # Namespace of the referenced resource
    spec:
      from:
      - group: gateway.networking.k8s.io
        kind: HTTPRoute
        namespace: MY_APP_NAMESPACE # Namespace of the referencing resource
      to:
      - group: "" # Core Kubernetes API group for Services
        kind: Service
        name: istio-ingressgateway # Load balancer Service of the new ingress gateway
  4. Apply the reference grant:

    kubectl apply -f istio-ingress-grant.yaml -n GATEWAY_NAMESPACE
  5. Verify that requests to the existing external IP address (for example, 34.57.246.68) are not failing. The following check-traffic-flow.sh describes a script to check for request failures:

    # Update the following values based on your application setup
    external_ip="34.57.246.68" # Replace with existing external IP
    url="http://$external_ip/get"
    host_name="httpbin.example.com"

    # Counter for successful requests
    success_count=0

    # Loop 50 times
    for i in {1..50}; do
      # Perform the curl request and capture the status code
      status_code=$(curl -s -H Host:"$host_name" -o /dev/null -w "%{http_code}" "$url")

      # Check if the request was successful (status code 200)
      if [ "$status_code" -eq 200 ]; then
        ((success_count++)) # Increment the success counter
      else
        echo "Request $i: Failed with status code $status_code"
      fi
    done

    # After the loop, check if all requests were successful
    if [ "$success_count" -eq 50 ]; then
      echo "All 50 requests were successful!"
    else
      echo "Some requests failed. Successful requests: $success_count"
    fi
  6. Execute the script to confirm that no requests fail, regardless of the traffic route:

    chmod +x check-traffic-flow.sh
    ./check-traffic-flow.sh

Request flow with traffic split between existing gateway and new istio ingress gateway

Slowly increase traffic percentage

If no request failures are seen for the existing external IP address (for example, 34.57.246.68), gradually shift more traffic to the new Istio Ingress Gateway by adjusting the backend weights in your HTTPRoute. Increase the weight for the istio-ingressgateway and decrease the weight for the old gateway in small increments, such as 10%, 20%, and so on.

Important: Continuously observe key metrics such as request success rate,latency, error rates, and the resource utilization of your application pods toensure stability at each increment.

Use the following command to update your existing HTTPRoute:

kubectl edit httproute httpbin-route -n MY_APP_NAMESPACE

Full traffic migration and removing the old gateway

  1. When the new Istio Ingress Gateway demonstrates stable performance and successful request handling, shift all traffic to it. Update your HTTPRoute to set the old gateway's backend weight to 0 and the new gateway's to 100.

  2. Once traffic is fully routed to the new gateway, update the external DNS records for your application's hostname (for example, httpbin.example.com) to point to the external IP address of the load balancer service created in Provision a new Istio Ingress Gateway.

  3. Finally, delete the old gateway and its associated resources:

    kubectl delete gateway OLD_GATEWAY -n GATEWAY_NAMESPACE
    kubectl delete service OLD_GATEWAY_SERVICE -n GATEWAY_NAMESPACE

Direct Migration

Provision a new Istio Ingress Gateway

  1. Deploy a new Ingress Gateway following the steps in the Deploy sample gateway section and customize the sample configurations to your requirements. The samples in the anthos-service-mesh repository deploy an istio-ingressgateway LoadBalancer service and the corresponding ingress-gateway pods.

    Example Gateway Resource (istio-ingressgateway.yaml)

    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: istio-api-gateway
      namespace: GATEWAY_NAMESPACE
    spec:
      selector:
        istio: ingressgateway # The selector should match the ingress-gateway pod labels.
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts: # or specific hostnames if needed
        - "httpbin.example.com"
  2. Apply the Gateway configuration to manage traffic:

    kubectl apply -f istio-ingressgateway.yaml -n GATEWAY_NAMESPACE

    Ensure the spec.selector in your Gateway resource matches the labels of your ingress-gateway pods. For example, if the ingress-gateway pods have the label istio=ingressgateway, your Gateway configuration must also select the istio=ingressgateway label.

Configure initial routing for the new Gateway

  1. Define the initial routing rules for your application using an Istio VirtualService.

    Example VirtualService (my-app-vs-new.yaml):

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: httpbin-vs
      namespace: APPLICATION_NAMESPACE
    spec:
      gateways:
      - istio-ingress/istio-api-gateway # Replace with <gateway-namespace>/<gateway-name>
      hosts:
      - httpbin.example.com
      http:
      - match:
        - uri:
            prefix: /get
        route:
        - destination:
            host: httpbin
            port:
              number: 8000
  2. Apply the VirtualService:

    kubectl apply -f my-app-vs-new.yaml -n MY_APP_NAMESPACE

Access the backend (httpbin) service through the newly deployed Istio Ingress Gateway

  1. Set the INGRESS_HOST environment variable to the external IP address associated with the recently deployed istio-ingressgateway load balancer:

    export INGRESS_HOST=$(kubectl -n GATEWAY_NAMESPACE get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  2. Verify the application (httpbin) is accessible using the new gateway:

    curl -s -I -H Host:httpbin.example.com "http://$INGRESS_HOST/get"

    The output is similar to:

    HTTP/1.1 200 OK

Request flow with the new istio ingress gateway

Test and monitor the new gateway

  1. Test all routing rules, validate TLS configuration, security policies, andother features. Perform load testing to verify the new gateway can handleexpected traffic.

  2. Once the new gateway is fully tested, update the external DNS records for your application's hostname (for example, httpbin.example.com) to point to the external IP address of the load balancer service created in Provision a new Istio Ingress Gateway.

  3. Monitor key metrics such as request success rate, latency, error rates, and the resource utilization of your application pods to verify stability with the new Istio Ingress Gateway. Once stable, delete the old gateway and its associated resources:

    kubectl delete gateway OLD_GATEWAY -n GATEWAY_NAMESPACE
    kubectl delete service OLD_GATEWAY_SERVICE -n GATEWAY_NAMESPACE

Important Considerations: Ensure TLS certificates and configurations are correctly set up on the new Istio Ingress Gateway if your application requires HTTPS. See Set up TLS termination in ingress gateway for more details.

Fix multiple control planes

Cloud Service Mesh previously supported onboarding using asmcli (deprecated), which did not block provisioning multiple control planes. Cloud Service Mesh now enforces the best practice of deploying only one channel per cluster, matching the cluster channel, and does not support using multiple deployed channels in the same cluster.

If you want to canary new versions of the mesh on the rapid channel before they become available in stable or regular, you need to use two different clusters, each on a separate channel. Note that channels are controlled by the GKE Cluster Channel, and the mesh does not have a separate channel associated with it.

You can check whether you have multiple channels by looking for the UNSUPPORTED_MULTIPLE_CONTROL_PLANES status condition on your membership. If this warning does not appear, then you are not impacted and can skip this section.

  1. Run the following command to check if your cluster has multiple control plane channels:

    gcloud container fleet mesh describe

    The output is similar to:

    ...
    projects/.../locations/global/memberships/my-membership:
      servicemesh:
        conditions:
        - code: UNSUPPORTED_MULTIPLE_CONTROL_PLANES
          details: 'Using multiple control planes is not supported. Please remove a control plane from your cluster.'
          documentationLink: https://cloud.google.com/service-mesh/docs/migrate/modernization-configuration-updates#multiple_control_planes
          severity: WARNING
        controlPlaneManagement:
          details:
          - code: REVISION_READY
            details: 'Ready: asm-managed-stable'
          implementation: ISTIOD
          state: ACTIVE
    ...
  2. If the UNSUPPORTED_MULTIPLE_CONTROL_PLANES condition appeared, determine which channels exist for your cluster:

    kubectl get controlplanerevisions -n istio-system

    The output is similar to:

    NAME                 RECONCILED   STALLED   AGE
    asm-managed-stable   True         False     97d
    asm-managed          True         False     97d
    asm-managed-rapid    True         False     97d

    In this example, all three channels were provisioned:

    • asm-managed-stable -> STABLE
    • asm-managed -> REGULAR
    • asm-managed-rapid -> RAPID

    If only one result appears, then only one channel is provisioned on your cluster, and you can skip the rest of these steps.

    If two or more results appear, then follow the rest of these steps to remove the surplus channels.

Consolidate workloads to one channel

Before you can remove additional channels, you must ensure your workloads are only using a single channel.

  1. Find all the labels you are using in your cluster:

    kubectl get namespaces -l istio.io/rev=RELEASE_CHANNEL

    Depending on the output from the previous command, replace RELEASE_CHANNEL with asm-managed-stable, asm-managed, or asm-managed-rapid. Repeat this step for each provisioned channel.

    The output is similar to:

    NAME      STATUS   AGE
    default   Active   110d

    Note that in this example, the default namespace is being injected with the regular channel.

    If all of your workloads are already using the same channel, you can skip to the Remove the extra channels step. Otherwise, continue in this section.

  2. Change the labels so that only one channel is being used:

    Caution: Consider the following before changing labels.
    • Pods can also be injected directly in some cases with the sidecar.istio.io/inject label. Make sure to check for that usage as well.
    • You can ignore istio-injection=enabled labels for this step. Namespaces with that label will automatically change to match whichever channel is left in the cluster.
    • When selecting a channel to keep, try to select the channel that is the same as your GKE Cluster Channel. If this channel does not exist, then select any one of the active channels.
    • The actual channel you select does not matter. The GKE Cluster Channel determines which version of mesh you get, not the mesh channel.
    • Check your meshconfig configuration between any active channels that are in use to make sure there are no differences between them. Each channel uses a separate configmap for configuration, so consolidating two channels down to one should ensure consistent behavior. You can compare each channel's configmap with the following command:

      kubectl get configmap istio-asm-managed{-rapid | -stable} -n istio-system -o yaml
    kubectl label namespace NAMESPACE istio.io/rev- istio-injection=enabled --overwrite

    Replace NAMESPACE with the name of your namespace.

    The best practice is to use istio-injection=enabled. However, if you don't want to use that label, then you can also use istio.io/rev=RELEASE_CHANNEL.

    Once you have changed the label for a namespace or pod, you must restart all workloads so that they are injected by the correct control plane.
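    Before restarting workloads, a quick way to confirm that every namespace now carries the label you settled on is to list the namespaces with their injection labels shown as columns:

    kubectl get namespaces -L istio.io/rev -L istio-injection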

Remove the extra channels

Once you have verified that all of your workloads are running on a single channel, you can remove the unused, extra channels. If all three release channels were provisioned, remember to run the following commands for each channel.

  1. Delete the extra ControlPlaneRevision resource:

    kubectl delete controlplanerevision RELEASE_CHANNEL -n istio-system

    Replace RELEASE_CHANNEL with asm-managed-stable, asm-managed, or asm-managed-rapid.

  2. Delete the MutatingWebhookConfiguration:

    kubectl delete mutatingwebhookconfiguration istiod-RELEASE_CHANNEL
  3. Delete the meshconfig configmap:

    kubectl delete configmap istio-RELEASE_CHANNEL -n istio-system
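    After deleting the resources for each surplus channel, you can re-run the earlier checks to confirm that only one ControlPlaneRevision remains and that the UNSUPPORTED_MULTIPLE_CONTROL_PLANES warning eventually clears (the fleet state can take several minutes to update):

    kubectl get controlplanerevisions -n istio-system
    gcloud container fleet mesh describe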

Enable automatic management

Caution: Enabling automatic control plane management automatically enables multi-cluster and managed data plane on that cluster.
  1. Run the following command to enable automatic management:

    gcloud container fleet mesh update \
        --management automatic \
        --memberships MEMBERSHIP_NAME \
        --project PROJECT_ID \
        --location MEMBERSHIP_LOCATION

    Replace the following:

    • MEMBERSHIP_NAME is the membership name listed when you verified that your cluster was registered to the fleet.
    • PROJECT_ID is the project ID of your project.
    • MEMBERSHIP_LOCATION is the location of your membership (either a region, or global). You can check your membership's location with gcloud container fleet memberships list --project PROJECT_ID.
  2. Verify that automatic management is enabled:

    gcloud container fleet mesh describe

    The output is similar to:

    ...
    membershipSpecs:
      projects/.../locations/us-central1/memberships/my-member:
        mesh:
          management: MANAGEMENT_AUTOMATIC
    membershipStates:
      projects/.../locations/us-central1/memberships/my-member:
        servicemesh:
          conditions:
          - code: VPCSC_GA_SUPPORTED
            details: This control plane supports VPC-SC GA.
            documentationLink: http://cloud.google.com/service-mesh/docs/managed/vpc-sc
            severity: INFO
          controlPlaneManagement:
            details:
            - code: REVISION_READY
              details: 'Ready: asm-managed'
            implementation: TRAFFIC_DIRECTOR
            state: ACTIVE
          dataPlaneManagement:
            details:
            - code: OK
              details: Service is running.
            state: ACTIVE
        state:
          code: OK
          description: |-
            Revision ready for use: asm-managed.
            All Canonical Services have been reconciled successfully.
    ...
