Set up a multi-cluster mesh on managed Cloud Service Mesh
Note: This guide only supports Cloud Service Mesh with Istio APIs and does not support Google Cloud APIs. For more information, see Cloud Service Mesh overview.

This guide explains how to join two clusters into a single Cloud Service Mesh using Mesh CA or Certificate Authority Service and enable cross-cluster load balancing. You can easily extend this process to incorporate any number of clusters into your mesh.
A multi-cluster Cloud Service Mesh configuration can solve several crucial enterprise scenarios, such as scale, location, and isolation. For more information, see Multi-cluster use cases.
Prerequisites
This guide assumes that you have two or more Google Cloud GKE clusters that meet the following requirements:
- Cloud Service Mesh installed on the clusters. You need asmcli, the istioctl tool, and the samples that asmcli downloads to the directory that you specified in --output_dir.
- Clusters in your mesh must have connectivity between all pods before you configure Cloud Service Mesh. Additionally, if you join clusters that are not in the same project, they must be registered to the same fleet host project (see the registration sketch after this list), and the clusters must be in a shared VPC configuration together on the same network. We also recommend that you have one project to host the Shared VPC, and two service projects for creating clusters. For more information, see Setting up clusters with Shared VPC.
- If you use Certificate Authority Service, all clusters must have their respective subordinate CA pools chained to the same root CA pool. Otherwise, they must all use the same CA pool.
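If your clusters are not yet registered to the fleet host project, registration typically looks like the following sketch. The membership names, locations, and the FLEET_HOST_PROJECT_ID placeholder are illustrative, so adjust them to your environment:

# Sketch: register each cluster to the same fleet host project (placeholder names).
gcloud container fleet memberships register CLUSTER_NAME_1 \
    --gke-cluster=CLUSTER_LOCATION_1/CLUSTER_NAME_1 \
    --enable-workload-identity \
    --project=FLEET_HOST_PROJECT_ID

gcloud container fleet memberships register CLUSTER_NAME_2 \
    --gke-cluster=CLUSTER_LOCATION_2/CLUSTER_NAME_2 \
    --enable-workload-identity \
    --project=FLEET_HOST_PROJECT_ID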
Setting project and cluster variables
Create the following environment variables for the project ID, cluster zone or region, cluster name, and context.
export PROJECT_1=PROJECT_ID_1
export LOCATION_1=CLUSTER_LOCATION_1
export CLUSTER_1=CLUSTER_NAME_1
export CTX_1="gke_${PROJECT_1}_${LOCATION_1}_${CLUSTER_1}"

export PROJECT_2=PROJECT_ID_2
export LOCATION_2=CLUSTER_LOCATION_2
export CLUSTER_2=CLUSTER_NAME_2
export CTX_2="gke_${PROJECT_2}_${LOCATION_2}_${CLUSTER_2}"

If these are newly created clusters, be sure to fetch credentials for each cluster with the following gcloud commands; otherwise, their associated context will not be available for use in the next steps of this guide. The commands depend on your cluster type, either regional or zonal:
Regional
gcloud container clusters get-credentials ${CLUSTER_1} --region ${LOCATION_1}
gcloud container clusters get-credentials ${CLUSTER_2} --region ${LOCATION_2}

Zonal
gcloud container clusters get-credentials ${CLUSTER_1} --zone ${LOCATION_1}
gcloud container clusters get-credentials ${CLUSTER_2} --zone ${LOCATION_2}
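To confirm that both contexts are now available in your kubeconfig, you can list them; this is an optional check, not part of the official procedure:

# Optional check: both cluster contexts should appear in the kubeconfig.
kubectl config get-contexts -o name | grep -E "${CLUSTER_1}|${CLUSTER_2}"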
Create firewall rule
In some cases, you need to create a firewall rule to allow cross-cluster traffic. For example, you need to create a firewall rule if:
- You use different subnets for the clusters in your mesh.
- Your Pods open ports other than 443 and 15002.
GKE automatically adds firewall rules to each node to allow traffic within the same subnet. If your mesh contains multiple subnets, you must explicitly set up the firewall rules to allow cross-subnet traffic. You must add a new firewall rule for each subnet to allow the source IP CIDR blocks and target ports of all the incoming traffic.
The following instructions allow communication between all clusters in your project or only between $CLUSTER_1 and $CLUSTER_2.
Gather information about your clusters' network.
All project clusters
If the clusters are in the same project, you can use the following command to allow communication between all clusters in your project. If there are clusters in your project that you don't want to expose, use the command in the Specific clusters tab.
function join_by { local IFS="$1"; shift; echo "$*"; }
ALL_CLUSTER_CIDRS=$(gcloud container clusters list --project $PROJECT_1 --format='value(clusterIpv4Cidr)' | sort | uniq)
ALL_CLUSTER_CIDRS=$(join_by , $(echo "${ALL_CLUSTER_CIDRS}"))
ALL_CLUSTER_NETTAGS=$(gcloud compute instances list --project $PROJECT_1 --format='value(tags.items.[0])' | sort | uniq)
ALL_CLUSTER_NETTAGS=$(join_by , $(echo "${ALL_CLUSTER_NETTAGS}"))

Specific clusters
The following command allows communication between $CLUSTER_1 and $CLUSTER_2 and doesn't expose other clusters in your project.

function join_by { local IFS="$1"; shift; echo "$*"; }
ALL_CLUSTER_CIDRS=$(for P in $PROJECT_1 $PROJECT_2; do gcloud --project $P container clusters list --filter="name:($CLUSTER_1,$CLUSTER_2)" --format='value(clusterIpv4Cidr)'; done | sort | uniq)
ALL_CLUSTER_CIDRS=$(join_by , $(echo "${ALL_CLUSTER_CIDRS}"))
ALL_CLUSTER_NETTAGS=$(for P in $PROJECT_1 $PROJECT_2; do gcloud --project $P compute instances list --filter="name:($CLUSTER_1,$CLUSTER_2)" --format='value(tags.items.[0])'; done | sort | uniq)
ALL_CLUSTER_NETTAGS=$(join_by , $(echo "${ALL_CLUSTER_NETTAGS}"))

Create the firewall rule.
GKE
gcloud compute firewall-rules create istio-multicluster-pods \
    --allow=tcp,udp,icmp,esp,ah,sctp \
    --direction=INGRESS \
    --priority=900 \
    --source-ranges="${ALL_CLUSTER_CIDRS}" \
    --target-tags="${ALL_CLUSTER_NETTAGS}" --quiet \
    --network=YOUR_NETWORK

Autopilot
TAGS=""forCLUSTERin${CLUSTER_1}${CLUSTER_2}doTAGS+=$(gcloudcomputefirewall-ruleslist--filter="Name:$CLUSTER*"--format="value(targetTags)"|uniq) &&TAGS+=","doneTAGS=${TAGS::-1}echo"Network tags for pod ranges are$TAGS"gcloudcomputefirewall-rulescreateasm-multicluster-pods\--allow=tcp,udp,icmp,esp,ah,sctp\--network=gke-cluster-vpc\--direction=INGRESS\--priority=900--network=VPC_NAME\--source-ranges="${ALL_CLUSTER_CIDRS}"\--target-tags=$TAGS
Configure endpoint discovery
Note: For more information on endpoint discovery, refer to Endpoint discovery with multiple control planes.

Warning: If endpoint discovery is enabled between clusters, intercluster services are not able to communicate with each other without proper DNS configuration. This is because GKE services depend on each service's fully qualified domain name (FQDN) for traffic routing. You can enable DNS proxy to allow intercluster discovery. DNS proxy captures all DNS requests and resolves them using information from the control plane.

Enable endpoint discovery between public or private clusters with declarative API
Enabling managed Cloud Service Mesh with the fleet API will enable endpoint discovery for this cluster. If you provisioned managed Cloud Service Mesh with a different tool, you can manually enable endpoint discovery across public or private clusters in a fleet by applying the config "multicluster_mode":"connected" in the asm-options configmap. Clusters with this config enabled in the same fleet will have cross-cluster service discovery automatically enabled between each other.
This is the only way to configure multi-cluster endpoint discovery if you have the Managed (TD) control plane implementation, and the recommended way to configure it if you have the Managed (Istiod) implementation.
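To check whether endpoint discovery is already enabled on a cluster, you can read the current value of multicluster_mode from the asm-options configmap. This is an optional check; an empty result means the key is not set, and an error means the configmap does not exist yet:

# Optional check: print the current multicluster_mode for each cluster.
for CTX in ${CTX_1} ${CTX_2}
do
    kubectl get configmap asm-options -n istio-system --context=${CTX} \
        -o jsonpath='{.data.multicluster_mode}'; echo
done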
Before proceeding, you must have created a firewall rule.
Enable
If the asm-options configmap already exists in your cluster, then enable endpoint discovery for the cluster:
kubectl patch configmap/asm-options -n istio-system --type merge -p '{"data":{"multicluster_mode":"connected"}}'

If the asm-options configmap doesn't yet exist in your cluster, then create it with the associated data and enable endpoint discovery for the cluster:
kubectl --context ${CTX_1} create configmap asm-options -n istio-system --from-file <(echo '{"data":{"multicluster_mode":"connected"}}')
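The previous command targets ${CTX_1}. If the configmap is also missing in your second cluster, the same command can be repeated with ${CTX_2}, for example as a loop; this is a sketch that simply mirrors the command above for both contexts:

# Sketch: create the asm-options configmap in both clusters.
for CTX in ${CTX_1} ${CTX_2}
do
    kubectl --context ${CTX} create configmap asm-options -n istio-system \
        --from-file <(echo '{"data":{"multicluster_mode":"connected"}}')
done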
Disable

Disable endpoint discovery for a cluster:
kubectl patch configmap/asm-options -n istio-system --type merge -p '{"data":{"multicluster_mode":"manual"}}'

If you unregister a cluster from the fleet without disabling endpoint discovery, secrets could remain in the cluster. You must manually clean up any remaining secrets.
Run the following command to find secrets requiring cleanup:
kubectl get secrets -n istio-system -l istio.io/owned-by=mesh.googleapis.com,istio/multiCluster=true

Delete each secret:
kubectl delete secret SECRET_NAME

Repeat this step for each remaining secret.
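Instead of deleting secrets one at a time, you can also delete everything that matches the label selector in a single command; this is a convenience sketch equivalent to repeating the delete for each listed secret:

# Sketch: delete all remaining multi-cluster secrets in one pass.
kubectl delete secrets -n istio-system \
    -l istio.io/owned-by=mesh.googleapis.com,istio/multiCluster=true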
Verify multi-cluster connectivity
This section explains how to deploy the sample HelloWorld and Sleep services to your multi-cluster environment to verify that cross-cluster load balancing works.
Note: These steps use the samples directory included in the istioctl tar file (located in OUTPUT_DIR/istio-${ASM_VERSION%+*}/samples) and not the samples directory downloaded by asmcli (located in OUTPUT_DIR/samples).

Set variable for samples directory
Navigate to where asmcli was downloaded, and run the following command to set ASM_VERSION:

export ASM_VERSION="$(./asmcli --version)"

Set a working folder to the samples that you use to verify that cross-cluster load balancing works. The samples are located in a subdirectory in the --output_dir directory that you specified in the asmcli install command. In the following command, change OUTPUT_DIR to the directory that you specified in --output_dir.

export SAMPLES_DIR=OUTPUT_DIR/istio-${ASM_VERSION%+*}
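To confirm that the variable points at the right location, you can list the HelloWorld sample; this is an optional check:

# Optional check: the HelloWorld sample should exist under SAMPLES_DIR.
ls ${SAMPLES_DIR}/samples/helloworld/helloworld.yaml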
Enable sidecar injection
Create the sample namespace in each cluster.
for CTX in ${CTX_1} ${CTX_2}
do
    kubectl create --context=${CTX} namespace sample
done

Enable the namespace for injection. The steps depend on your control plane implementation.
Managed (TD)
- Apply the default injection label to the namespace:
for CTX in ${CTX_1} ${CTX_2}
do
    kubectl label --context=${CTX} namespace sample \
        istio.io/rev- istio-injection=enabled --overwrite
done

Managed (Istiod)
Recommended: Run the following command to apply the default injection label to the namespace:
for CTX in ${CTX_1} ${CTX_2}
do
    kubectl label --context=${CTX} namespace sample \
        istio.io/rev- istio-injection=enabled --overwrite
done

If you are an existing user with the Managed Istiod control plane: We recommend that you use default injection, but revision-based injection is supported. Use the following instructions:
Run the following command to locate the available release channels:
kubectl -n istio-system get controlplanerevision

The output is similar to the following:
Note: If two control plane revisions appear in the list, remove one. Having multiple control plane channels in the cluster is not supported.

NAME                AGE
asm-managed-rapid   6d7h

In the output, the value under the NAME column is the revision label that corresponds to the available release channel for the Cloud Service Mesh version.

Apply the revision label to the namespace:
for CTX in ${CTX_1} ${CTX_2}
do
    kubectl label --context=${CTX} namespace sample \
        istio-injection- istio.io/rev=REVISION_LABEL --overwrite
done
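Whichever injection method you used, you can confirm the labels applied to the sample namespace in both clusters; this is an optional check:

# Optional check: show the injection labels on the sample namespace.
for CTX in ${CTX_1} ${CTX_2}
do
    kubectl get namespace sample --context=${CTX} --show-labels
done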
Install the HelloWorld service
Note: The HelloWorld service example uses Docker Hub. In a private cluster, the container runtime can pull container images from Artifact Registry by default. The container runtime cannot pull images from any other container image registry on the internet. You can download the image and push it to Artifact Registry, or use Cloud NAT to provide outbound internet access for certain private nodes. For more information, see Migrate external containers and Creating a private cluster.

Create the HelloWorld service in both clusters:
kubectl create --context=${CTX_1} \
    -f ${SAMPLES_DIR}/samples/helloworld/helloworld.yaml \
    -l service=helloworld -n sample

kubectl create --context=${CTX_2} \
    -f ${SAMPLES_DIR}/samples/helloworld/helloworld.yaml \
    -l service=helloworld -n sample
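Before deploying the workloads, you can verify that the helloworld Service object now exists in both clusters; this is an optional check:

# Optional check: the helloworld Service should exist in the sample namespace.
for CTX in ${CTX_1} ${CTX_2}
do
    kubectl get service helloworld -n sample --context=${CTX}
done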
Deploy HelloWorld v1 and v2 to each cluster
Deploy HelloWorld v1 to CLUSTER_1 and v2 to CLUSTER_2, which helps later to verify cross-cluster load balancing:

kubectl create --context=${CTX_1} \
    -f ${SAMPLES_DIR}/samples/helloworld/helloworld.yaml \
    -l version=v1 -n sample

kubectl create --context=${CTX_2} \
    -f ${SAMPLES_DIR}/samples/helloworld/helloworld.yaml \
    -l version=v2 -n sample

Confirm that HelloWorld v1 and v2 are running using the following commands. Verify that the output is similar to that shown:

kubectl get pod --context=${CTX_1} -n sample

NAME                             READY   STATUS    RESTARTS   AGE
helloworld-v1-86f77cd7bd-cpxhv   2/2     Running   0          40s
kubectl get pod --context=${CTX_2} -n sample

NAME                             READY   STATUS    RESTARTS   AGE
helloworld-v2-758dd55874-6x4t8   2/2     Running   0          40s
Deploy the Sleep service
Deploy the Sleep service to both clusters. This pod generates artificial network traffic for demonstration purposes:

for CTX in ${CTX_1} ${CTX_2}
do
    kubectl apply --context=${CTX} \
        -f ${SAMPLES_DIR}/samples/sleep/sleep.yaml -n sample
done

Wait for the Sleep service to start in each cluster. Verify that the output is similar to that shown:

kubectl get pod --context=${CTX_1} -n sample -l app=sleep

NAME                     READY   STATUS    RESTARTS   AGE
sleep-754684654f-n6bzf   2/2     Running   0          5s
kubectl get pod --context=${CTX_2} -n sample -l app=sleep

NAME                     READY   STATUS    RESTARTS   AGE
sleep-754684654f-dzl9j   2/2     Running   0          5s
Verify cross-cluster load balancing
Call the HelloWorld service several times and check the output to verify alternating replies from v1 and v2:
Call the HelloWorld service:

kubectl exec --context="${CTX_1}" -n sample -c sleep \
    "$(kubectl get pod --context="${CTX_1}" -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- /bin/sh -c 'for i in $(seq 1 20); do curl -sS helloworld.sample:5000/hello; done'

The output is similar to that shown:
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
...
Call the HelloWorld service again:

kubectl exec --context="${CTX_2}" -n sample -c sleep \
    "$(kubectl get pod --context="${CTX_2}" -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- /bin/sh -c 'for i in $(seq 1 20); do curl -sS helloworld.sample:5000/hello; done'

The output is similar to that shown:
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
...
Congratulations, you've verified your load-balanced, multi-cluster Cloud Service Mesh!
Keeping traffic in-cluster
In some cases, the default cross-cluster load balancing behavior is not desirable. To keep traffic "cluster-local" (that is, traffic sent from cluster-a only reaches destinations in cluster-a), mark hostnames or wildcards as clusterLocal using MeshConfig.serviceSettings.
For example, you can enforce cluster-local traffic for an individual service, all services in a particular namespace, or globally for all services in the mesh, as follows:
per-service
serviceSettings:
- settings:
    clusterLocal: true
  hosts:
  - "mysvc.myns.svc.cluster.local"

per-namespace
serviceSettings:
- settings:
    clusterLocal: true
  hosts:
  - "*.myns.svc.cluster.local"

global
serviceSettings:
- settings:
    clusterLocal: true
  hosts:
  - "*"

You can also refine service access by setting a global cluster-local rule and adding explicit exceptions, which can be specific or wildcard. In the following example, all services in the cluster are kept cluster-local, except any service in the myns namespace:
serviceSettings:
- settings:
    clusterLocal: true
  hosts:
  - "*"
- settings:
    clusterLocal: false
  hosts:
  - "*.myns.svc.cluster.local"

Enable the Local Cluster Service
Check the MeshConfig config map in the cluster
kubectl get configmap -n istio-system

You should see a config map with one of the names istio-asm-managed, istio-asm-managed-rapid, or istio-asm-managed-stable.

If you have migrated from the ISTIOD implementation to the TRAFFIC_DIRECTOR implementation, you might see more than one config map. In this case, you can determine the channel by running the following command:

kubectl get controlplanerevision -n istio-system

The channel of the reconciled control plane revision is the one you want to pick.
Update the Config Map
cat <<EOF > config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: CONFIGMAP_NAME
  namespace: istio-system
data:
  mesh: |-
    serviceSettings:
    - settings:
        clusterLocal: true
      hosts:
      - "*"
EOF

Replace CONFIGMAP_NAME with the name of the config map you found in step 1, and then update the config map:

kubectl apply --context=${CTX_1} -f config.yaml

Confirm that the cluster-local setting is working as expected using the following commands. The output for calling HelloWorld with CTX_1 is similar to the following:

kubectl exec --context="${CTX_1}" -n sample -c sleep \
    "$(kubectl get pod --context="${CTX_1}" -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- /bin/sh -c 'for i in $(seq 1 20); do curl -sS helloworld.sample:5000/hello; done'

You should see only v1 responses in the output:
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
...

If you call the HelloWorld service with CTX_2:

kubectl exec --context="${CTX_2}" -n sample -c sleep \
    "$(kubectl get pod --context="${CTX_2}" -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- /bin/sh -c 'for i in $(seq 1 20); do curl -sS helloworld.sample:5000/hello; done'

You should see alternating replies from v1 and v2 in the output.
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
...
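If the responses are not what you expect, you can inspect the mesh settings that were applied; the following is a troubleshooting sketch, where CONFIGMAP_NAME is the config map name you used earlier:

# Sketch: print the serviceSettings currently stored in the mesh config map.
kubectl get configmap CONFIGMAP_NAME -n istio-system \
    --context=${CTX_1} -o jsonpath='{.data.mesh}'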
Clean up HelloWorld service
When you finish verifying load balancing, remove the HelloWorld and Sleep services from your clusters.
kubectl delete ns sample --context ${CTX_1}
kubectl delete ns sample --context ${CTX_2}