Deploy an external multi-cluster Gateway
This document guides you through a practical example to deploy an external multi-cluster Gateway to route internet traffic to an application that runs in two different GKE clusters.
Multi-cluster Gateways provide a powerful way to manage traffic for services deployed across multiple GKE clusters. By using Google's global load-balancing infrastructure, you can create a single entry point for your applications, which simplifies management and improves reliability.

In this tutorial, you use a sample `store` application to simulate a real-world scenario where an online shopping service is owned and operated by separate teams and deployed across a fleet of shared GKE clusters.
Before you begin
Multi-cluster Gateways require some environmental preparation before they can be deployed. Before you proceed, follow the steps in Prepare your environment for multi-cluster Gateways:
Deploy GKE clusters.
Register your clusters to a fleet (if they aren't already).
Enable the multi-cluster Service and multi-cluster Gateway controllers.
Finally, review the GKE Gateway controller limitations and known issues before you use the controller in your environment.
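If you have already completed that guide, the controller-enablement steps it covers reduce to commands along the lines of the following sketch. This is a non-authoritative reminder only: the project and membership values are placeholders, and the preparation guide remains the source of truth for the exact flags.

```
# Sketch only: enable multi-cluster Services (MCS) for the fleet.
gcloud container fleet multi-cluster-services enable \
    --project=PROJECT_ID

# Sketch only: enable the multi-cluster Gateway controller, using the
# gke-west-1 membership as the config cluster.
gcloud container fleet ingress enable \
    --config-membership=gke-west-1 \
    --project=PROJECT_ID
```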
Multi-cluster, multi-region, external Gateway
In this tutorial, you create an external multi-cluster Gateway that serves external traffic across an application running in two GKE clusters.
In the following steps you:
- Deploy the sample `store` application to the `gke-west-1` and `gke-east-1` clusters.
- Configure Services on each cluster to be exported into your fleet (multi-cluster Services).
- Deploy an external multi-cluster Gateway and an HTTPRoute to your config cluster (`gke-west-1`).
After the application and Gateway resources are deployed, you can control traffic across the two GKE clusters using path-based routing:

- Requests to `/west` are routed to `store` Pods in the `gke-west-1` cluster.
- Requests to `/east` are routed to `store` Pods in the `gke-east-1` cluster.
- Requests to any other path are routed to either cluster, according to its health, capacity, and proximity to the requesting client.
Deploying the demo application
Create the `store` Deployment and Namespace in all three of the clusters that were deployed in Prepare your environment for multi-cluster Gateways:

```
kubectl apply --context gke-west-1 -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-networking-recipes/main/gateway/gke-gateway-controller/multi-cluster-gateway/store.yaml
kubectl apply --context gke-west-2 -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-networking-recipes/main/gateway/gke-gateway-controller/multi-cluster-gateway/store.yaml
kubectl apply --context gke-east-1 -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-networking-recipes/main/gateway/gke-gateway-controller/multi-cluster-gateway/store.yaml
```

It deploys the following resources to each cluster:

```
namespace/store created
deployment.apps/store created
```

All examples in this page use the app deployed in this step. Make sure that the app is deployed across all three clusters before trying any of the remaining steps. This example uses only clusters `gke-west-1` and `gke-east-1`; `gke-west-2` is used in another example.
Multi-cluster Services
Services are how Pods are exposed to clients. Because the GKE Gateway controller uses container-native load balancing, it does not use the ClusterIP or Kubernetes load balancing to reach Pods. Traffic is sent directly from the load balancer to the Pod IP addresses. However, Services still play a critical role as a logical identifier for Pod grouping.
Multi-cluster Services (MCS) is an API standard for Services that span clusters, and its GKE controller provides service discovery across GKE clusters. The multi-cluster Gateway controller uses MCS API resources to group Pods into a Service that is addressable across, or that spans, multiple clusters.
The multi-cluster Services API defines the following custom resources:
- ServiceExports map to a Kubernetes Service, exporting the endpoints of that Service to all clusters registered to the fleet. When a Service has a corresponding ServiceExport, it means that the Service can be addressed by a multi-cluster Gateway.
- ServiceImports are automatically generated by the multi-cluster Service controller. ServiceExport and ServiceImport come in pairs. If a ServiceExport exists in the fleet, then a corresponding ServiceImport is created to allow the Service mapped to the ServiceExport to be accessed from across clusters.
Exporting Services works in the following way. A `store` Service exists in `gke-west-1` which selects a group of Pods in that cluster. A ServiceExport is created in the cluster which lets the Pods in `gke-west-1` become accessible from the other clusters in the fleet. The ServiceExport maps to and exposes Services that have the same name and Namespace as the ServiceExport resource.
```
apiVersion: v1
kind: Service
metadata:
  name: store
  namespace: store
spec:
  selector:
    app: store
  ports:
  - port: 8080
    targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: store
  namespace: store
```

The following diagram shows what happens after a ServiceExport is deployed. If a ServiceExport and Service pair exist, then the multi-cluster Service controller deploys a corresponding ServiceImport to every GKE cluster in the fleet. The ServiceImport is the local representation of the `store` Service in every cluster. This enables the client Pod in `gke-east-1` to use ClusterIP or headless Services to reach the `store` Pods in `gke-west-1`. When used in this manner, multi-cluster Services provide east-west load balancing between clusters without requiring an internal LoadBalancer Service. To use multi-cluster Services for cluster-to-cluster load balancing, see Configuring multi-cluster Services.
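As an illustrative aside (not one of this tutorial's steps), you could check this east-west path from `gke-east-1` by curling the ClusterSet DNS name that multi-cluster Services creates for the exported Service. The DNS name format and the throwaway client Pod below are assumptions to verify against Configuring multi-cluster Services:

```
# Hypothetical check: run a temporary curl Pod in gke-east-1 and call the
# imported store Service through its MCS ClusterSet DNS name
# (SERVICE_NAME.NAMESPACE.svc.clusterset.local).
kubectl run mcs-test --context gke-east-1 --namespace store \
    --image=curlimages/curl --restart=Never --rm -it -- \
    curl http://store.store.svc.clusterset.local:8080
```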
Multi-cluster Gateways also use ServiceImports, but not for cluster-to-cluster load balancing. Instead, Gateways use ServiceImports as logical identifiers for a Service that exists in another cluster or that stretches across multiple clusters. The following HTTPRoute references a ServiceImport instead of a Service resource. Referencing a ServiceImport indicates that the route forwards traffic to a group of backend Pods that run across one or more clusters.
```
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: store-route
  namespace: store
  labels:
    gateway: multi-cluster-gateway
spec:
  parentRefs:
  - kind: Gateway
    namespace: store
    name: external-http
  hostnames:
  - "store.example.com"
  rules:
  - backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: store
      port: 8080
```

The following diagram shows how the HTTPRoute routes store.example.com traffic to `store` Pods on `gke-west-1` and `gke-east-1`. The load balancer treats them as one pool of backends. If the Pods from one of the clusters become unhealthy, unreachable, or have no traffic capacity, then traffic load is balanced to the remaining Pods on the other cluster. New clusters can be added or removed with the `store` Service and ServiceExport. This transparently adds or removes backend Pods without any explicit routing configuration changes.
Exporting Services
At this point, the application is running across both clusters. Next, you expose and export the applications by deploying Services and ServiceExports to each cluster.
Apply the following manifest to the `gke-west-1` cluster to create your `store` and `store-west-1` Services and ServiceExports:

```
cat << EOF | kubectl apply --context gke-west-1 -f -
apiVersion: v1
kind: Service
metadata:
  name: store
  namespace: store
spec:
  selector:
    app: store
  ports:
  - port: 8080
    targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: store
  namespace: store
---
apiVersion: v1
kind: Service
metadata:
  name: store-west-1
  namespace: store
spec:
  selector:
    app: store
  ports:
  - port: 8080
    targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: store-west-1
  namespace: store
EOF
```

Apply the following manifest to the `gke-east-1` cluster to create your `store` and `store-east-1` Services and ServiceExports:

```
cat << EOF | kubectl apply --context gke-east-1 -f -
apiVersion: v1
kind: Service
metadata:
  name: store
  namespace: store
spec:
  selector:
    app: store
  ports:
  - port: 8080
    targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: store
  namespace: store
---
apiVersion: v1
kind: Service
metadata:
  name: store-east-1
  namespace: store
spec:
  selector:
    app: store
  ports:
  - port: 8080
    targetPort: 8080
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: store-east-1
  namespace: store
EOF
```

Verify that the correct ServiceExports have been created in the clusters:
```
kubectl get serviceexports --context CLUSTER_NAME --namespace store
```

Replace CLUSTER_NAME with `gke-west-1` and `gke-east-1`. The output resembles the following:

```
# gke-west-1
NAME           AGE
store          2m40s
store-west-1   2m40s

# gke-east-1
NAME           AGE
store          2m25s
store-east-1   2m25s
```

The output demonstrates that the `store` Service contains `store` Pods across both clusters, and the `store-west-1` and `store-east-1` Services only contain `store` Pods on their respective clusters. These overlapping Services are used to target the Pods across multiple clusters or a subset of Pods on a single cluster.

After a few minutes, verify that the accompanying ServiceImports have been automatically created by the multi-cluster Services controller across all clusters in the fleet.

Note: The first MCS you create in your fleet can take up to 20 minutes to be fully operational. Exporting new services after the first one is created, or adding endpoints to existing Multi-cluster Services, is faster (up to a few minutes in some cases).

```
kubectl get serviceimports --context CLUSTER_NAME --namespace store
```

Replace CLUSTER_NAME with `gke-west-1` and `gke-east-1`. The output should resemble the following:

```
# gke-west-1
NAME           TYPE           IP                  AGE
store          ClusterSetIP   ["10.112.31.15"]    6m54s
store-east-1   ClusterSetIP   ["10.112.26.235"]   5m49s
store-west-1   ClusterSetIP   ["10.112.16.112"]   6m54s

# gke-east-1
NAME           TYPE           IP                 AGE
store          ClusterSetIP   ["10.72.28.226"]   5d10h
store-east-1   ClusterSetIP   ["10.72.19.177"]   5d10h
store-west-1   ClusterSetIP   ["10.72.28.68"]    4h32m
```

This demonstrates that all three Services are accessible from both clusters in the fleet. However, because there is only a single active config cluster per fleet, you can only deploy Gateways and HTTPRoutes that reference these ServiceImports in `gke-west-1`. When an HTTPRoute in the config cluster references these ServiceImports as backends, the Gateway can forward traffic to these Services no matter which cluster they are exported from.
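If a ServiceImport you expect does not appear, you can optionally inspect the status conditions reported on the corresponding ServiceExport. This is an optional troubleshooting check, not a required tutorial step:

```
# Optional: inspect the ServiceExport status and events for export errors.
kubectl describe serviceexport store --context gke-west-1 --namespace store
```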
Deploying the Gateway and HTTPRoute
Once the applications have been deployed, you can then configure a Gateway using the `gke-l7-global-external-managed-mc` GatewayClass. This Gateway creates an external Application Load Balancer configured to distribute traffic across your target clusters.
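Before applying the manifest, you can optionally confirm that the multi-cluster GatewayClasses are available in the config cluster; this check is not required for the tutorial:

```
# Optional: list the GatewayClasses installed in the config cluster.
kubectl get gatewayclasses --context gke-west-1
```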
Apply the following Gateway manifest to the config cluster, `gke-west-1` in this example:

Note: It might take several minutes (up to 10) for the Gateway to fully deploy and serve traffic.

```
cat << EOF | kubectl apply --context gke-west-1 -f -
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: external-http
  namespace: store
spec:
  gatewayClassName: gke-l7-global-external-managed-mc
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
EOF
```

This Gateway configuration deploys external Application Load Balancer resources with the following naming convention: `gkemcg1-NAMESPACE-GATEWAY_NAME-HASH`.

The default resources created with this configuration are:

- 1 load balancer: `gkemcg1-store-external-http-HASH`
- 1 public IP address: `gkemcg1-store-external-http-HASH`
- 1 forwarding rule: `gkemcg1-store-external-http-HASH`
- 2 backend services:
  - Default 404 backend service: `gkemcg1-store-gw-serve404-HASH`
  - Default 500 backend service: `gkemcg1-store-gw-serve500-HASH`
- 1 health check:
  - Default 404 health check: `gkemcg1-store-gw-serve404-HASH`
- 0 routing rules (the URL map is empty)

At this stage, any request to the GATEWAY_IP:80 results in a default page displaying the following message: `fault filter abort`.

Warning: In this example, we are deploying an external multi-cluster Gateway listening on port 80. If you are deploying an external multi-cluster Gateway for a production environment, you should ensure that you configure a set of features to properly secure your Gateway, depending on your context and your environment. To learn how to secure a multi-cluster Gateway, see Secure a Gateway. To learn how to apply Policies to a multi-cluster Gateway, see Configure Gateway resources using Policies.
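If you want to see the underlying Compute Engine resources that match this naming convention, you can optionally list them with gcloud. This is an optional check; the filter expressions below are illustrative and assume the `gkemcg1-store-external-http` prefix shown above:

```
# Optional: list the load balancer resources created for this Gateway.
gcloud compute forwarding-rules list --filter="name~gkemcg1-store-external-http"
gcloud compute backend-services list --filter="name~gkemcg1-store"
```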
Apply the following HTTPRoute manifest to the config cluster, `gke-west-1` in this example:

```
cat << EOF | kubectl apply --context gke-west-1 -f -
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: public-store-route
  namespace: store
  labels:
    gateway: external-http
spec:
  hostnames:
  - "store.example.com"
  parentRefs:
  - name: external-http
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /west
    backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: store-west-1
      port: 8080
  - matches:
    - path:
        type: PathPrefix
        value: /east
    backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: store-east-1
      port: 8080
  - backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: store
      port: 8080
EOF
```

After deployment, this HTTPRoute configures the following routing behavior:
- Requests to `/west` are routed to `store` Pods in the `gke-west-1` cluster, because Pods selected by the `store-west-1` ServiceExport only exist in the `gke-west-1` cluster.
- Requests to `/east` are routed to `store` Pods in the `gke-east-1` cluster, because Pods selected by the `store-east-1` ServiceExport only exist in the `gke-east-1` cluster.
- Requests to any other path are routed to `store` Pods in either cluster, according to its health, capacity, and proximity to the requesting client.
- Requests to the GATEWAY_IP:80 result in a default page displaying the following message: `fault filter abort`.
Note that if all the Pods on a given cluster are unhealthy (or don't exist), then traffic to the `store` Service would only be sent to clusters that actually have `store` Pods. The existence of a ServiceExport and Service on a given cluster does not guarantee that traffic is sent to that cluster. Pods must exist and respond affirmatively to the load balancer health check, or else the load balancer just sends traffic to healthy `store` Pods in other clusters.

New resources are created with this configuration:
- 3 backend services:
  - The `store` backend service: `gkemcg1-store-store-8080-HASH`
  - The `store-east-1` backend service: `gkemcg1-store-store-east-1-8080-HASH`
  - The `store-west-1` backend service: `gkemcg1-store-store-west-1-8080-HASH`
- 3 Health checks:
  - The `store` health check: `gkemcg1-store-store-8080-HASH`
  - The `store-east-1` health check: `gkemcg1-store-store-east-1-8080-HASH`
  - The `store-west-1` health check: `gkemcg1-store-store-west-1-8080-HASH`
- 1 routing rule in the URL map:
  - The `store.example.com` routing rule:
    - 1 Host: `store.example.com`
    - Multiple `matchRules` to route to the new backend services
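Optionally, you can inspect the generated URL map to confirm the store.example.com host rule and its path matchers. This is an optional check; the filter is illustrative, and URL_MAP_NAME is a placeholder for the name returned by the list command:

```
# Optional: find the URL map created for the Gateway, then describe it.
gcloud compute url-maps list --filter="name~gkemcg1-store-external-http"
gcloud compute url-maps describe URL_MAP_NAME --global
```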
The following diagram shows the resources you've deployed across both clusters. Because `gke-west-1` is the Gateway config cluster, it is the cluster in which our Gateway, HTTPRoutes, and ServiceImports are watched by the Gateway controller. Each cluster has a `store` ServiceImport and another ServiceImport specific to that cluster. Both point at the same Pods. This lets the HTTPRoute specify exactly where traffic should go: to the `store` Pods on a specific cluster or to the `store` Pods across all clusters.
Note that this is a logical resource model, not a depiction of the trafficflow. The traffic path goes directly from the load balancer to backend Podsand has no direct relation to whichever cluster is the config cluster.
Validating deployment
You can now issue requests to our multi-cluster Gateway and distribute trafficacross both GKE clusters.
Validate that the Gateway and HTTPRoute have been deployed successfully by inspecting the Gateway status and events:

```
kubectl describe gateways.gateway.networking.k8s.io external-http --context gke-west-1 --namespace store
```

Your output should look similar to the following:

```
Name:         external-http
Namespace:    store
Labels:       <none>
Annotations:  networking.gke.io/addresses: /projects/PROJECT_NUMBER/global/addresses/gkemcg1-store-external-http-laup24msshu4
              networking.gke.io/backend-services:
                /projects/PROJECT_NUMBER/global/backendServices/gkemcg1-store-gw-serve404-80-n65xmts4xvw2, /projects/PROJECT_NUMBER/global/backendServices/gke...
              networking.gke.io/firewalls: /projects/PROJECT_NUMBER/global/firewalls/gkemcg1-l7-default-global
              networking.gke.io/forwarding-rules: /projects/PROJECT_NUMBER/global/forwardingRules/gkemcg1-store-external-http-a5et3e3itxsv
              networking.gke.io/health-checks:
                /projects/PROJECT_NUMBER/global/healthChecks/gkemcg1-store-gw-serve404-80-n65xmts4xvw2, /projects/PROJECT_NUMBER/global/healthChecks/gkemcg1-s...
              networking.gke.io/last-reconcile-time: 2023-10-12T17:54:24Z
              networking.gke.io/ssl-certificates:
              networking.gke.io/target-http-proxies: /projects/PROJECT_NUMBER/global/targetHttpProxies/gkemcg1-store-external-http-94oqhkftu5yz
              networking.gke.io/target-https-proxies:
              networking.gke.io/url-maps: /projects/PROJECT_NUMBER/global/urlMaps/gkemcg1-store-external-http-94oqhkftu5yz
API Version:  gateway.networking.k8s.io/v1
Kind:         Gateway
Metadata:
  Creation Timestamp:  2023-10-12T06:59:32Z
  Finalizers:
    gateway.finalizer.networking.gke.io
  Generation:        1
  Resource Version:  467057
  UID:               1dcb188e-2917-404f-9945-5f3c2e907b4c
Spec:
  Gateway Class Name:  gke-l7-global-external-managed-mc
  Listeners:
    Allowed Routes:
      Kinds:
        Group:  gateway.networking.k8s.io
        Kind:   HTTPRoute
      Namespaces:
        From:  Same
    Name:      http
    Port:      80
    Protocol:  HTTP
Status:
  Addresses:
    Type:   IPAddress
    Value:  34.36.127.249
  Conditions:
    Last Transition Time:  2023-10-12T07:00:41Z
    Message:               The OSS Gateway API has deprecated this condition, do not depend on it.
    Observed Generation:   1
    Reason:                Scheduled
    Status:                True
    Type:                  Scheduled
    Last Transition Time:  2023-10-12T07:00:41Z
    Message:
    Observed Generation:   1
    Reason:                Accepted
    Status:                True
    Type:                  Accepted
    Last Transition Time:  2023-10-12T07:00:41Z
    Message:
    Observed Generation:   1
    Reason:                Programmed
    Status:                True
    Type:                  Programmed
    Last Transition Time:  2023-10-12T07:00:41Z
    Message:               The OSS Gateway API has altered the "Ready" condition semantics and reserved it for future use. GKE Gateway will stop emitting it in a future update, use "Programmed" instead.
    Observed Generation:   1
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Listeners:
    Attached Routes:  1
    Conditions:
      Last Transition Time:  2023-10-12T07:00:41Z
      Message:
      Observed Generation:   1
      Reason:                Programmed
      Status:                True
      Type:                  Programmed
      Last Transition Time:  2023-10-12T07:00:41Z
      Message:               The OSS Gateway API has altered the "Ready" condition semantics and reserved it for future use. GKE Gateway will stop emitting it in a future update, use "Programmed" instead.
      Observed Generation:   1
      Reason:                Ready
      Status:                True
      Type:                  Ready
    Name:             http
    Supported Kinds:
      Group:  gateway.networking.k8s.io
      Kind:   HTTPRoute
Events:
  Type    Reason  Age                    From                   Message
  ----    ------  ----                   ----                   -------
  Normal  UPDATE  35m (x4 over 10h)      mc-gateway-controller  store/external-http
  Normal  SYNC    4m22s (x216 over 10h)  mc-gateway-controller  SYNC on store/external-http was a success
```

Once the Gateway has deployed successfully, retrieve the external IP address from the `external-http` Gateway:

```
kubectl get gateways.gateway.networking.k8s.io external-http -o=jsonpath="{.status.addresses[0].value}" --context gke-west-1 --namespace store
```

Replace VIP in the following steps with the IP address you receive as output.
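Optionally, you can capture the address in a shell variable instead of copying it manually; if you do, use `http://${VIP}` in the commands that follow. This is a convenience sketch only:

```
# Optional: store the Gateway IP address in a shell variable.
VIP=$(kubectl get gateways.gateway.networking.k8s.io external-http \
    -o=jsonpath="{.status.addresses[0].value}" \
    --context gke-west-1 --namespace store)
echo "${VIP}"
```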
Send traffic to the root path of the domain. This load balances traffic to the `store` ServiceImport, which spans clusters `gke-west-1` and `gke-east-1`. The load balancer sends your traffic to the region closest to you, and you might not see responses from the other region.

```
curl -H "host: store.example.com" http://VIP
```

The output confirms that the request was served by a Pod from the `gke-east-1` cluster:

```
{
  "cluster_name": "gke-east-1",
  "zone": "us-east1-b",
  "host_header": "store.example.com",
  "node_name": "gke-gke-east-1-default-pool-7aa30992-t2lp.c.agmsb-k8s.internal",
  "pod_name": "store-5f5b954888-dg22z",
  "pod_name_emoji": "⏭",
  "project_id": "agmsb-k8s",
  "timestamp": "2021-06-01T17:32:51"
}
```

Next, send traffic to the `/west` path. This routes traffic to the `store-west-1` ServiceImport, which only has Pods running on the `gke-west-1` cluster. A cluster-specific ServiceImport, like `store-west-1`, enables an application owner to explicitly send traffic to a specific cluster, rather than letting the load balancer make the decision.

```
curl -H "host: store.example.com" http://VIP/west
```

The output confirms that the request was served by a Pod from the `gke-west-1` cluster:

```
{
  "cluster_name": "gke-west-1",
  "zone": "us-west1-a",
  "host_header": "store.example.com",
  "node_name": "gke-gke-west-1-default-pool-65059399-2f41.c.agmsb-k8s.internal",
  "pod_name": "store-5f5b954888-d25m5",
  "pod_name_emoji": "🍾",
  "project_id": "agmsb-k8s",
  "timestamp": "2021-06-01T17:39:15"
}
```

Finally, send traffic to the `/east` path:

```
curl -H "host: store.example.com" http://VIP/east
```

The output confirms that the request was served by a Pod from the `gke-east-1` cluster:

```
{
  "cluster_name": "gke-east-1",
  "zone": "us-east1-b",
  "host_header": "store.example.com",
  "node_name": "gke-gke-east-1-default-pool-7aa30992-7j7z.c.agmsb-k8s.internal",
  "pod_name": "store-5f5b954888-hz6mw",
  "pod_name_emoji": "🧜🏾",
  "project_id": "agmsb-k8s",
  "timestamp": "2021-06-01T17:40:48"
}
```
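To see the proximity-based default routing more clearly, you can optionally send a small burst of requests to the root path and compare the cluster_name values in the responses. A minimal sketch, assuming VIP is the Gateway address from the previous step:

```
# Optional: send several requests and print which cluster served each one.
for i in $(seq 1 6); do
  curl -s -H "host: store.example.com" http://VIP | grep cluster_name
done
```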
Clean up
After completing the exercises in this document, follow these steps to remove resources and avoid incurring unwanted charges on your account:
Unregister the clusters from the fleet if they don't need to be registered for another purpose.
Disable the `multiclusterservicediscovery` feature:

```
gcloud container fleet multi-cluster-services disable
```

Disable Multi Cluster Ingress:

```
gcloud container fleet ingress disable
```

Disable the APIs:

```
gcloud services disable \
    multiclusterservicediscovery.googleapis.com \
    multiclusteringress.googleapis.com \
    trafficdirector.googleapis.com \
    --project=PROJECT_ID
```
Troubleshooting
No healthy upstream
Symptom:
The following issue might occur when you create a Gateway but cannot access the backend services (503 response code):

```
no healthy upstream
```

Reason:

This error message indicates that the health check prober cannot find healthy backend services. It is possible that your backend services are healthy, but you might need to customize the health checks.
Workaround:
To resolve this issue, customize your health check based on your application's requirements (for example, `/health`) using a HealthCheckPolicy.
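As a starting point, a HealthCheckPolicy for the `store` backends could look roughly like the following sketch. The field names and the ServiceImport target are assumptions based on the examples in Configure Gateway resources using Policies, so verify the schema there and adjust the path and port for your application:

```
# Hypothetical sketch of a HealthCheckPolicy; verify field names against
# "Configure Gateway resources using Policies" before using it.
apiVersion: networking.gke.io/v1
kind: HealthCheckPolicy
metadata:
  name: store-health-check
  namespace: store
spec:
  default:
    config:
      type: HTTP
      httpHealthCheck:
        port: 8080
        requestPath: /health
  targetRef:
    group: net.gke.io
    kind: ServiceImport
    name: store
```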
What's next
- Learn more about the Gateway controller.