Set up a multi-cluster mesh on managed Cloud Service Mesh

Note: This guide only supports Cloud Service Mesh with Istio APIs and does not support Google Cloud APIs. For more information, see Cloud Service Mesh overview.

This guide explains how to join two clusters into a single Cloud Service Mesh using Mesh CA or Certificate Authority Service and enable cross-cluster load balancing. You can easily extend this process to incorporate any number of clusters into your mesh.

A multi-cluster Cloud Service Mesh configuration can solve several crucial enterprise scenarios, such as scale, location, and isolation. For more information, see Multi-cluster use cases.

Prerequisites

This guide assumes that you have two or more Google Cloud GKE clusters that meet the following requirements:

  • Cloud Service Mesh installed on the clusters. You need asmcli, the istioctl tool, and the samples that asmcli downloads to the directory that you specified in --output_dir.
  • Clusters in your mesh must have connectivity between all pods before you configure Cloud Service Mesh. Additionally, if you join clusters that are not in the same project, they must be registered to the same fleet host project, and the clusters must be in a shared VPC configuration together on the same network. We also recommend that you have one project to host the Shared VPC, and two service projects for creating clusters. For more information, see Setting up clusters with Shared VPC.
  • If you use Certificate Authority Service, all clusters must have their respective subordinate CA pools chain to the same root CA pool. Otherwise, they must all use the same CA pool.

Set project and cluster variables

  1. Create the following environment variables for the project ID, cluster zone or region, cluster name, and context.

    export PROJECT_1=PROJECT_ID_1
    export LOCATION_1=CLUSTER_LOCATION_1
    export CLUSTER_1=CLUSTER_NAME_1
    export CTX_1="gke_${PROJECT_1}_${LOCATION_1}_${CLUSTER_1}"

    export PROJECT_2=PROJECT_ID_2
    export LOCATION_2=CLUSTER_LOCATION_2
    export CLUSTER_2=CLUSTER_NAME_2
    export CTX_2="gke_${PROJECT_2}_${LOCATION_2}_${CLUSTER_2}"
  2. If these are newly created clusters, make sure to fetch credentials for each cluster with the following gcloud commands; otherwise, their associated contexts will not be available for use in the next steps of this guide.

    The commands depend on your cluster type, either regional or zonal:

    Regional

    gcloud container clusters get-credentials ${CLUSTER_1} --region ${LOCATION_1}
    gcloud container clusters get-credentials ${CLUSTER_2} --region ${LOCATION_2}

    Zonal

    gcloud container clusters get-credentials ${CLUSTER_1} --zone ${LOCATION_1}
    gcloud container clusters get-credentials ${CLUSTER_2} --zone ${LOCATION_2}
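
    Optionally, you can confirm that both cluster contexts are now available in your kubeconfig before continuing. This is a minimal check, assuming the context names follow the gke_PROJECT_LOCATION_CLUSTER pattern set in the variables above:

    # List kubeconfig contexts and confirm that entries for both clusters appear.
    kubectl config get-contexts --output=name | grep -E "${CLUSTER_1}|${CLUSTER_2}"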

Create firewall rule

In some cases, you need to create a firewall rule to allow cross-cluster traffic. For example, you need to create a firewall rule if:

  • You use different subnets for the clusters in your mesh.
  • Your Pods open ports other than 443 and 15002.

GKE automatically adds firewall rules to each node to allow traffic within the same subnet. If your mesh contains multiple subnets, you must explicitly set up the firewall rules to allow cross-subnet traffic. You must add a new firewall rule for each subnet to allow the source IP CIDR blocks and target ports of all the incoming traffic.

The following instructions allow communication between all clusters in your project or only between $CLUSTER_1 and $CLUSTER_2.

  1. Gather information about your clusters' network.

    All project clusters

    If the clusters are in the same project, you can use the following command to allow communication between all clusters in your project. If there are clusters in your project that you don't want to expose, use the command in the Specific clusters tab.

    function join_by { local IFS="$1"; shift; echo "$*"; }
    ALL_CLUSTER_CIDRS=$(gcloud container clusters list --project $PROJECT_1 --format='value(clusterIpv4Cidr)' | sort | uniq)
    ALL_CLUSTER_CIDRS=$(join_by , $(echo "${ALL_CLUSTER_CIDRS}"))
    ALL_CLUSTER_NETTAGS=$(gcloud compute instances list --project $PROJECT_1 --format='value(tags.items.[0])' | sort | uniq)
    ALL_CLUSTER_NETTAGS=$(join_by , $(echo "${ALL_CLUSTER_NETTAGS}"))

    Specific clusters

    The following command allows communication between $CLUSTER_1 and $CLUSTER_2 and doesn't expose other clusters in your project.

    function join_by { local IFS="$1"; shift; echo "$*"; }
    ALL_CLUSTER_CIDRS=$(for P in $PROJECT_1 $PROJECT_2; do gcloud --project $P container clusters list --filter="name:($CLUSTER_1,$CLUSTER_2)" --format='value(clusterIpv4Cidr)'; done | sort | uniq)
    ALL_CLUSTER_CIDRS=$(join_by , $(echo "${ALL_CLUSTER_CIDRS}"))
    ALL_CLUSTER_NETTAGS=$(for P in $PROJECT_1 $PROJECT_2; do gcloud --project $P compute instances list --filter="name:($CLUSTER_1,$CLUSTER_2)" --format='value(tags.items.[0])'; done | sort | uniq)
    ALL_CLUSTER_NETTAGS=$(join_by , $(echo "${ALL_CLUSTER_NETTAGS}"))
  2. Create the firewall rule.

    GKE

    gcloud compute firewall-rules create istio-multicluster-pods \
        --allow=tcp,udp,icmp,esp,ah,sctp \
        --direction=INGRESS \
        --priority=900 \
        --source-ranges="${ALL_CLUSTER_CIDRS}" \
        --target-tags="${ALL_CLUSTER_NETTAGS}" --quiet \
        --network=YOUR_NETWORK

    Autopilot

    TAGS=""forCLUSTERin${CLUSTER_1}${CLUSTER_2}doTAGS+=$(gcloudcomputefirewall-ruleslist--filter="Name:$CLUSTER*"--format="value(targetTags)"|uniq) &&TAGS+=","doneTAGS=${TAGS::-1}echo"Network tags for pod ranges are$TAGS"gcloudcomputefirewall-rulescreateasm-multicluster-pods\--allow=tcp,udp,icmp,esp,ah,sctp\--network=gke-cluster-vpc\--direction=INGRESS\--priority=900--network=VPC_NAME\--source-ranges="${ALL_CLUSTER_CIDRS}"\--target-tags=$TAGS

Configure endpoint discovery

Note: For more information on endpoint discovery, refer to Endpoint discovery with multiple control planes.

Warning: If endpoint discovery is enabled between clusters, intercluster services are not able to communicate with each other without proper DNS configuration. This is because GKE services depend on each service's fully qualified domain name (FQDN) for traffic routing. You can enable DNS proxy to allow intercluster discovery. DNS proxy captures all DNS requests and resolves them using information from the control plane.

Enable endpoint discovery between public or private clusters with declarative API

Enabling managed Cloud Service Mesh with the fleet API will enable endpoint discovery for this cluster. If you provisioned managed Cloud Service Mesh with a different tool, you can manually enable endpoint discovery across public or private clusters in a fleet by applying the config "multicluster_mode":"connected" in the asm-options configmap. Clusters with this config enabled in the same fleet will have cross-cluster service discovery automatically enabled between each other.

Warning: Don't use this method if your clusters use existing in-clusterremote secrets.

This is the only way to configure multi-cluster endpoint discovery if you have the Managed (TD) control plane implementation, and the recommended way to configure it if you have the Managed (Istiod) implementation.

Before proceeding, you must have created a firewall rule.

Enable

If the asm-options configmap already exists in your cluster, then enable endpoint discovery for the cluster:

      kubectl patch configmap/asm-options -n istio-system --type merge -p '{"data":{"multicluster_mode":"connected"}}'

If the asm-options configmap doesn't yet exist in your cluster, then create it with the associated data and enable endpoint discovery for the cluster:

      kubectl --context ${CTX_1} create configmap asm-options -n istio-system --from-file <(echo '{"data":{"multicluster_mode":"connected"}}')
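
In either case, you can optionally confirm the setting by reading the multicluster_mode key back from the configmap. A minimal check, assuming the configmap lives in the istio-system namespace as shown above:

      # Should print "connected" once endpoint discovery is enabled.
      kubectl --context ${CTX_1} get configmap asm-options -n istio-system \
          -o jsonpath='{.data.multicluster_mode}'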

Disable

Disable endpoint discovery for a cluster:

      kubectl patch configmap/asm-options -n istio-system --type merge -p '{"data":{"multicluster_mode":"manual"}}'

If you unregister a cluster from the fleet without disabling endpoint discovery,secrets could remain in the cluster. You must manually clean up any remainingsecrets.

  1. Run the following command to find secrets requiring cleanup:

    kubectl get secrets -n istio-system -l istio.io/owned-by=mesh.googleapis.com,istio/multiCluster=true
  2. Delete each secret:

    kubectl delete secret SECRET_NAME

    Repeat this step for each remaining secret.
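
    If many secrets remain, you can alternatively delete every secret that matches the labels from the previous step in one pass. This is a sketch rather than the documented procedure; review the list from step 1 before running it:

    # Delete all mesh-owned multi-cluster remote secrets in one command.
    kubectl delete secrets -n istio-system \
        -l istio.io/owned-by=mesh.googleapis.com,istio/multiCluster=true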

Verify multi-cluster connectivity

This section explains how to deploy the sample HelloWorld and Sleep services to your multi-cluster environment to verify that cross-cluster load balancing works.

Note: These sample services are located in the Istio samples directory included in the istioctl tar file (located in OUTPUT_DIR/istio-${ASM_VERSION%+*}/samples) and not the samples directory downloaded by asmcli (located in OUTPUT_DIR/samples).

Set variable for samples directory

  1. Navigate to where asmcli was downloaded, and run the following command to set ASM_VERSION:

    export ASM_VERSION="$(./asmcli --version)"
  2. Set a working folder to the samples that you use to verify that cross-cluster load balancing works. The samples are located in a subdirectory in the --output_dir directory that you specified in the asmcli install command. In the following command, change OUTPUT_DIR to the directory that you specified in --output_dir.

    export SAMPLES_DIR=OUTPUT_DIR/istio-${ASM_VERSION%+*}
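
    You can optionally sanity-check both values before continuing. This is a small sketch that assumes asmcli printed a version string and that the Istio samples were unpacked into the directory set above:

    # Confirm the version string and that the HelloWorld sample exists.
    echo "ASM version: ${ASM_VERSION}"
    ls ${SAMPLES_DIR}/samples/helloworld/helloworld.yaml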

Enable sidecar injection

  1. Create the sample namespace in each cluster.

    for CTX in ${CTX_1} ${CTX_2}
    do
        kubectl create --context=${CTX} namespace sample
    done
  2. Enable the namespace for injection. The steps depend on your control plane implementation. After labeling, you can verify the result as shown after these steps.

    Managed (TD)

    1. Apply the default injection label to the namespace:
     for CTX in ${CTX_1} ${CTX_2}
     do
         kubectl label --context=${CTX} namespace sample \
             istio.io/rev- istio-injection=enabled --overwrite
     done

    Managed (Istiod)

    Recommended: Run the following command to apply the default injection label to the namespace:

     for CTX in ${CTX_1} ${CTX_2}
     do
         kubectl label --context=${CTX} namespace sample \
             istio.io/rev- istio-injection=enabled --overwrite
     done

    If you are an existing user with the Managed Istiod control plane: We recommend that you use default injection, but revision-based injection is supported. Use the following instructions:

    1. Run the following command to locate the available release channels:

      kubectl -n istio-system get controlplanerevision

      The output is similar to the following:

      NAME                AGE
      asm-managed-rapid   6d7h
      Note: If two control plane revisions appear in the earlier list, remove one. Having multiple control plane channels in the cluster is not supported.

      In the output, the value under the NAME column is the revision label that corresponds to the available release channel for the Cloud Service Mesh version.

    2. Apply the revision label to the namespace:

      for CTX in ${CTX_1} ${CTX_2}
      do
          kubectl label --context=${CTX} namespace sample \
              istio-injection- istio.io/rev=REVISION_LABEL --overwrite
      done
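
Whichever labeling approach you used, you can optionally confirm that the sample namespace exists and carries the expected injection label in both clusters. A minimal sketch:

    for CTX in ${CTX_1} ${CTX_2}
    do
        # Shows the labels on the sample namespace; expect istio-injection=enabled
        # or istio.io/rev=REVISION_LABEL, depending on the approach you chose.
        kubectl get namespace sample --context=${CTX} --show-labels
    done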

Install the HelloWorld service

Note: The HelloWorld service example uses Docker Hub. In a private cluster, the container runtime can pull container images from Artifact Registry by default. The container runtime cannot pull images from any other container image registry on the internet. You can download the image and push it to Artifact Registry, or use Cloud NAT to provide outbound internet access for certain private nodes. For more information, see Migrate external containers and Creating a private cluster.
  • Create the HelloWorld service in both clusters:

    kubectl create --context=${CTX_1} \
        -f ${SAMPLES_DIR}/samples/helloworld/helloworld.yaml \
        -l service=helloworld -n sample
    kubectl create --context=${CTX_2} \
        -f ${SAMPLES_DIR}/samples/helloworld/helloworld.yaml \
        -l service=helloworld -n sample
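
    To check that the HelloWorld Service object was created in each cluster, you can optionally run a quick lookup. A minimal sketch:

    # Expect a ClusterIP service named helloworld on port 5000 in both clusters.
    kubectl get service helloworld --context=${CTX_1} -n sample
    kubectl get service helloworld --context=${CTX_2} -n sample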

Deploy HelloWorld v1 and v2 to each cluster

  1. Deploy HelloWorld v1 to CLUSTER_1 and v2 to CLUSTER_2, which helps later to verify cross-cluster load balancing:

    kubectl create --context=${CTX_1} \
        -f ${SAMPLES_DIR}/samples/helloworld/helloworld.yaml \
        -l version=v1 -n sample
    kubectl create --context=${CTX_2} \
        -f ${SAMPLES_DIR}/samples/helloworld/helloworld.yaml \
        -l version=v2 -n sample
  2. Confirm HelloWorld v1 and v2 are running using the following commands. Verify that the output is similar to that shown:

    kubectl get pod --context=${CTX_1} -n sample
    NAME                            READY     STATUS    RESTARTS   AGE
    helloworld-v1-86f77cd7bd-cpxhv  2/2       Running   0          40s
    kubectl get pod --context=${CTX_2} -n sample
    NAME                            READY     STATUS    RESTARTS   AGE
    helloworld-v2-758dd55874-6x4t8  2/2       Running   0          40s

Deploy the Sleep service

  1. Deploy the Sleep service to both clusters. This pod generates artificial network traffic for demonstration purposes:

    for CTX in ${CTX_1} ${CTX_2}
    do
        kubectl apply --context=${CTX} \
            -f ${SAMPLES_DIR}/samples/sleep/sleep.yaml -n sample
    done
  2. Wait for the Sleep service to start in each cluster. Verify that the output is similar to that shown:

    kubectl get pod --context=${CTX_1} -n sample -l app=sleep
    NAME                             READY   STATUS    RESTARTS   AGE
    sleep-754684654f-n6bzf           2/2     Running   0          5s
    kubectl get pod --context=${CTX_2} -n sample -l app=sleep
    NAME                             READY   STATUS    RESTARTS   AGE
    sleep-754684654f-dzl9j           2/2     Running   0          5s

Verify cross-cluster load balancing

Call the HelloWorld service several times and check the output to verify alternating replies from v1 and v2:

  1. Call the HelloWorld service:

    kubectl exec --context="${CTX_1}" -n sample -c sleep \    "$(kubectl get pod --context="${CTX_1}" -n sample -l \    app=sleep -o jsonpath='{.items[0].metadata.name}')" \    -- /bin/sh -c 'for i in $(seq 1 20); do curl -sS helloworld.sample:5000/hello; done'

    The output is similar to that shown:

    Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
    Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
    ...
  2. Call the HelloWorld service again:

    kubectl exec --context="${CTX_2}" -n sample -c sleep \    "$(kubectl get pod --context="${CTX_2}" -n sample -l \    app=sleep -o jsonpath='{.items[0].metadata.name}')" \    -- /bin/sh -c 'for i in $(seq 1 20); do curl -sS helloworld.sample:5000/hello; done'

    The output is similar to that shown:

    Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
    Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
    ...
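
If you want a quick summary instead of reading the raw replies, you can optionally pipe the same loop through sort and uniq to count how often each version answered. This is a sketch, assuming roughly even cross-cluster balancing between v1 and v2:

    kubectl exec --context="${CTX_1}" -n sample -c sleep \
        "$(kubectl get pod --context="${CTX_1}" -n sample -l \
        app=sleep -o jsonpath='{.items[0].metadata.name}')" \
        -- /bin/sh -c 'for i in $(seq 1 20); do curl -sS helloworld.sample:5000/hello; done' \
        | cut -d',' -f1 | sort | uniq -c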

Congratulations, you've verified your load-balanced, multi-cluster Cloud Service Mesh!

Clean up HelloWorld service

When you finish verifying load balancing, remove the HelloWorld and Sleep services from your clusters.

kubectl delete ns sample --context ${CTX_1}
kubectl delete ns sample --context ${CTX_2}
