From edge to multi-cluster mesh: Deploy globally distributed applications through GKE Gateway and Cloud Service Mesh
This document shows you how to accomplish the following tasks:
- Deploy globally distributed applications exposed through GKE Gateway and Cloud Service Mesh.
- Expose an application to multiple clients by combining Cloud Load Balancing with Cloud Service Mesh.
- Integrate load balancers with a service mesh deployed across multiple Google Cloud regions.
This deployment guide is intended for platform administrators and for advanced practitioners who run Cloud Service Mesh. The instructions also work for Istio on GKE.
Architecture
The following diagram shows the default ingress topology of a service mesh: an external TCP/UDP load balancer that exposes the ingress gateway proxies on a single cluster.
This deployment guide uses Google Kubernetes Engine (GKE) Gateway resources. Specifically, it uses a multi-cluster gateway to configure multi-region load balancing in front of multiple Autopilot clusters that are distributed across two regions.
The preceding diagram shows how data flows through cloud ingress and mesh ingress scenarios. For more information, see the explanation of the architecture diagram in the associated reference architecture document.
Objectives
- Deploy a pair of GKE Autopilot clusters on Google Cloud to the same fleet.
- Deploy an Istio-based Cloud Service Mesh to the same fleet.
- Configure a load balancer using GKE Gateway to terminate public HTTPS traffic.
- Direct public HTTPS traffic to applications hosted by Cloud Service Mesh that are deployed across multiple clusters and regions.
- Deploy the whereami sample application to both Autopilot clusters.
Cost optimization
In this document, you use the following billable components of Google Cloud:
- Google Kubernetes Engine
- Cloud Load Balancing
- Cloud Service Mesh
- Multi Cluster Ingress
- Google Cloud Armor
- Certificate Manager
- Cloud Endpoints
To generate a cost estimate based on your projected usage, use the pricing calculator.
When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.
Before you begin
In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.
Roles required to select or create a project:
- Select a project: Selecting a project doesn't require a specific IAM role; you can select any project that you've been granted a role on.
- Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
Verify that billing is enabled for your Google Cloud project.
In the Google Cloud console, activate Cloud Shell.
You run all of the terminal commands for this deployment from Cloud Shell.
Set your default Google Cloud project:
```bash
export PROJECT=YOUR_PROJECT
export PROJECT_NUMBER=$(gcloud projects describe PROJECT_ID --format="value(projectNumber)")
gcloud config set project PROJECT_ID
```

Replace PROJECT_ID with the ID of the project that you want to use for this deployment.

Create a working directory:

```bash
mkdir -p ${HOME}/edge-to-mesh-multi-region
cd ${HOME}/edge-to-mesh-multi-region
export WORKDIR=`pwd`
```
Create GKE clusters
In this section, you create GKE clusters to host the applications and supporting infrastructure, which you create later in this deployment guide.
In Cloud Shell, create a new kubeconfig file. This step ensures that you don't create a conflict with your existing (default) kubeconfig file.

```bash
touch edge2mesh_mr_kubeconfig
export KUBECONFIG=${WORKDIR}/edge2mesh_mr_kubeconfig
```

Define the environment variables that are used when creating the GKE clusters and the resources within them. Modify the default region choices to suit your purposes.

```bash
export CLUSTER_1_NAME=edge-to-mesh-01
export CLUSTER_2_NAME=edge-to-mesh-02
export CLUSTER_1_REGION=us-central1
export CLUSTER_2_REGION=us-east4
export PUBLIC_ENDPOINT=frontend.endpoints.PROJECT_ID.cloud.goog
```

Enable the Google Cloud APIs that are used throughout this guide:

```bash
gcloud services enable \
  container.googleapis.com \
  mesh.googleapis.com \
  gkehub.googleapis.com \
  multiclusterservicediscovery.googleapis.com \
  multiclusteringress.googleapis.com \
  trafficdirector.googleapis.com \
  certificatemanager.googleapis.com
```

Create a GKE Autopilot cluster with private nodes in CLUSTER_1_REGION. Use the --async flag to avoid waiting for the first cluster to provision and register to the fleet:

```bash
gcloud container clusters create-auto --async \
  ${CLUSTER_1_NAME} --region ${CLUSTER_1_REGION} \
  --release-channel rapid --labels mesh_id=proj-${PROJECT_NUMBER} \
  --enable-private-nodes --enable-fleet
```

Create and register a second Autopilot cluster in CLUSTER_2_REGION:

```bash
gcloud container clusters create-auto \
  ${CLUSTER_2_NAME} --region ${CLUSTER_2_REGION} \
  --release-channel rapid --labels mesh_id=proj-${PROJECT_NUMBER} \
  --enable-private-nodes --enable-fleet
```

Ensure that the clusters are running. It might take up to 20 minutes until all clusters are running:

```bash
gcloud container clusters list
```

The output is similar to the following:

```
NAME             LOCATION     MASTER_VERSION  MASTER_IP       MACHINE_TYPE  NODE_VERSION    NUM_NODES  STATUS
edge-to-mesh-01  us-central1  1.27.5-gke.200  34.27.171.241   e2-small      1.27.5-gke.200             RUNNING
edge-to-mesh-02  us-east4     1.27.5-gke.200  35.236.204.156  e2-small      1.27.5-gke.200             RUNNING
```

Gather the credentials for CLUSTER_1_NAME. You created CLUSTER_1_NAME asynchronously so you could run additional commands while the cluster provisioned.

```bash
gcloud container clusters get-credentials ${CLUSTER_1_NAME} \
  --region ${CLUSTER_1_REGION}
```

To clarify the names of the Kubernetes contexts, rename them to the names of the clusters:

```bash
kubectl config rename-context gke_PROJECT_ID_${CLUSTER_1_REGION}_${CLUSTER_1_NAME} ${CLUSTER_1_NAME}
kubectl config rename-context gke_PROJECT_ID_${CLUSTER_2_REGION}_${CLUSTER_2_NAME} ${CLUSTER_2_NAME}
```
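If you want to confirm that the renamed contexts are in place before you continue, you can optionally list the contexts in the new kubeconfig file. This check is an addition to the guide, not part of the original procedure:

```bash
# List the contexts in the kubeconfig file created for this deployment;
# the output should show the two cluster names used above.
kubectl config get-contexts -o name
```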
Install a service mesh
In this section, you configure the managed Cloud Service Mesh with the fleet API. Using the fleet API to enable Cloud Service Mesh provides a declarative approach to provision a service mesh.
In Cloud Shell, enable Cloud Service Mesh on the fleet:
```bash
gcloud container fleet mesh enable
```

Enable automatic control plane and data plane management:

```bash
gcloud container fleet mesh update \
  --management automatic \
  --memberships ${CLUSTER_1_NAME},${CLUSTER_2_NAME}
```

Wait about 20 minutes. Then verify that the control plane status is ACTIVE:

```bash
gcloud container fleet mesh describe
```

The output is similar to the following:

```yaml
createTime: '2023-11-30T19:23:21.713028916Z'
membershipSpecs:
  projects/603904278888/locations/us-central1/memberships/edge-to-mesh-01:
    mesh:
      management: MANAGEMENT_AUTOMATIC
  projects/603904278888/locations/us-east4/memberships/edge-to-mesh-02:
    mesh:
      management: MANAGEMENT_AUTOMATIC
membershipStates:
  projects/603904278888/locations/us-central1/memberships/edge-to-mesh-01:
    servicemesh:
      controlPlaneManagement:
        details:
        - code: REVISION_READY
          details: 'Ready: asm-managed-rapid'
        implementation: ISTIOD
        state: ACTIVE
      dataPlaneManagement:
        details:
        - code: OK
          details: Service is running.
        state: ACTIVE
    state:
      code: OK
      description: |-
        Revision ready for use: asm-managed-rapid.
        All Canonical Services have been reconciled successfully.
      updateTime: '2024-06-27T09:00:21.333579005Z'
  projects/603904278888/locations/us-east4/memberships/edge-to-mesh-02:
    servicemesh:
      controlPlaneManagement:
        details:
        - code: REVISION_READY
          details: 'Ready: asm-managed-rapid'
        implementation: ISTIOD
        state: ACTIVE
      dataPlaneManagement:
        details:
        - code: OK
          details: Service is running.
        state: ACTIVE
    state:
      code: OK
      description: |-
        Revision ready for use: asm-managed-rapid.
        All Canonical Services have been reconciled successfully.
      updateTime: '2024-06-27T09:00:24.674852751Z'
name: projects/e2m-private-test-01/locations/global/features/servicemesh
resourceState:
  state: ACTIVE
spec: {}
updateTime: '2024-06-04T17:16:28.730429993Z'
```
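If you only want the per-cluster control plane state rather than the full describe output, you can optionally filter it with jq. This snippet is an added convenience and assumes the field layout shown in the example output above:

```bash
# Print one line per fleet membership with its control plane state.
gcloud container fleet mesh describe --format=json | \
  jq -r '.membershipStates | to_entries[] |
         "\(.key): \(.value.servicemesh.controlPlaneManagement.state)"'
```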
Deploy an external Application Load Balancer and create ingress gateways
In this section, you deploy an external Application Load Balancer through the GKE Gateway controller and create ingress gateways for both clusters. The gateway and gatewayClass resources automate the provisioning of the load balancer and backend health checking. To provide TLS termination on the load balancer, you create Certificate Manager resources and attach them to the load balancer. Additionally, you use Endpoints to automatically provision a public DNS name for the application.
Install an ingress gateway on both clusters
As a security best practice, we recommend that you deploy the ingress gatewayin a different namespace from the mesh control plane.
In Cloud Shell, create a dedicated asm-ingress namespace on each cluster:

```bash
kubectl --context=${CLUSTER_1_NAME} create namespace asm-ingress
kubectl --context=${CLUSTER_2_NAME} create namespace asm-ingress
```

Add a namespace label to the asm-ingress namespaces:

```bash
kubectl --context=${CLUSTER_1_NAME} label namespace asm-ingress istio-injection=enabled
kubectl --context=${CLUSTER_2_NAME} label namespace asm-ingress istio-injection=enabled
```

The output is similar to the following:

```
namespace/asm-ingress labeled
```

Labeling the asm-ingress namespaces with istio-injection=enabled instructs Cloud Service Mesh to automatically inject Envoy sidecar proxies when a pod is deployed.

Generate a self-signed certificate for future use:

```bash
openssl req -new -newkey rsa:4096 -days 365 -nodes -x509 \
  -subj "/CN=frontend.endpoints.PROJECT_ID.cloud.goog/O=Edge2Mesh Inc" \
  -keyout ${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.key \
  -out ${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.crt
```

The certificate provides an additional layer of encryption between the load balancer and the service mesh ingress gateways. It also enables support for HTTP/2-based protocols like gRPC. Instructions about how to attach the self-signed certificate to the ingress gateways are provided later in Create external IP address, DNS record, and TLS certificate resources.

For more information about the requirements of the ingress gateway certificate, see Encryption from the load balancer to the backends.
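If you want to confirm what you generated before moving on, you can inspect the certificate's subject and validity period. This check is an optional addition, not part of the original procedure:

```bash
# Print the subject and validity dates of the self-signed certificate.
openssl x509 -in ${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.crt \
  -noout -subject -dates
```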
Create a Kubernetes secret on each cluster to store the self-signed certificate:

```bash
kubectl --context ${CLUSTER_1_NAME} -n asm-ingress create secret tls \
  edge2mesh-credential \
  --key=${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.key \
  --cert=${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.crt
kubectl --context ${CLUSTER_2_NAME} -n asm-ingress create secret tls \
  edge2mesh-credential \
  --key=${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.key \
  --cert=${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.crt
```

To integrate with the external Application Load Balancer, create a kustomize variant to configure the ingress gateway resources:

```bash
mkdir -p ${WORKDIR}/asm-ig/base

cat <<EOF > ${WORKDIR}/asm-ig/base/kustomization.yaml
resources:
- github.com/GoogleCloudPlatform/anthos-service-mesh-samples/docs/ingress-gateway-asm-manifests/base
EOF

mkdir ${WORKDIR}/asm-ig/variant

cat <<EOF > ${WORKDIR}/asm-ig/variant/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
EOF

cat <<EOF > ${WORKDIR}/asm-ig/variant/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: asm-ingressgateway
subjects:
- kind: ServiceAccount
  name: asm-ingressgateway
EOF

cat <<EOF > ${WORKDIR}/asm-ig/variant/service-proto-type.yaml
apiVersion: v1
kind: Service
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
spec:
  ports:
  - name: status-port
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http
    port: 80
    targetPort: 8080
    appProtocol: HTTP
  - name: https
    port: 443
    targetPort: 8443
    appProtocol: HTTP2
  type: ClusterIP
EOF

cat <<EOF > ${WORKDIR}/asm-ig/variant/gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
spec:
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*" # IMPORTANT: Must use wildcard here when using SSL, as SNI isn't passed from GFE
    tls:
      mode: SIMPLE
      credentialName: edge2mesh-credential
EOF

cat <<EOF > ${WORKDIR}/asm-ig/variant/kustomization.yaml
namespace: asm-ingress
resources:
- ../base
- role.yaml
- rolebinding.yaml
patches:
- path: service-proto-type.yaml
  target:
    kind: Service
- path: gateway.yaml
  target:
    kind: Gateway
EOF
```

Apply the ingress gateway configuration to both clusters:

```bash
kubectl --context ${CLUSTER_1_NAME} apply -k ${WORKDIR}/asm-ig/variant
kubectl --context ${CLUSTER_2_NAME} apply -k ${WORKDIR}/asm-ig/variant
```
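Before continuing, you can optionally check that the ingress gateway pods are scheduled and running in each cluster. This verification step is an addition to the guide, and the exact pod names in your output will differ:

```bash
# List the ingress gateway pods in each cluster; they should reach the Running state.
kubectl --context=${CLUSTER_1_NAME} -n asm-ingress get pods
kubectl --context=${CLUSTER_2_NAME} -n asm-ingress get pods
```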
Expose ingress gateway pods to the load balancer using a multi-cluster service
In this section, you export the ingress gateway pods through a ServiceExport custom resource. Exporting the ingress gateway pods through a ServiceExport custom resource is necessary for the following reasons:
- Allows the load balancer to address the ingress gateway pods across multiple clusters.
- Allows the ingress gateway pods to proxy requests to services running within the service mesh.
In Cloud Shell, enable multi-cluster Services (MCS) for the fleet:

```bash
gcloud container fleet multi-cluster-services enable
```

Grant MCS the required IAM permissions to the project or fleet:

```bash
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[gke-mcs/gke-mcs-importer]" \
  --role "roles/compute.networkViewer"
```

Create the ServiceExport YAML file:

```bash
cat <<EOF > ${WORKDIR}/svc_export.yaml
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
EOF
```

Apply the ServiceExport YAML file to both clusters:

```bash
kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/svc_export.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/svc_export.yaml
```

If you receive the following error message, wait a few moments for the MCS custom resource definitions (CRDs) to install. Then re-run the commands to apply the ServiceExport YAML file to both clusters.

```
error: resource mapping not found for name: "asm-ingressgateway" namespace: "asm-ingress" from "svc_export.yaml": no matches for kind "ServiceExport" in version "net.gke.io/v1"
ensure CRDs are installed first
```
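One way to check whether the MCS CRDs have finished installing before you retry is to query for them directly. This check is a suggested addition; it simply looks for CRDs in the net.gke.io API group that MCS installs:

```bash
# If CRDs in the net.gke.io group are listed, the ServiceExport apply should succeed on retry.
kubectl --context=${CLUSTER_1_NAME} get crds | grep net.gke.io
kubectl --context=${CLUSTER_2_NAME} get crds | grep net.gke.io
```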
Create external IP address, DNS record, and TLS certificate resources
In this section, you create networking resources that support the load-balancing resources that you create later in this deployment.
In Cloud Shell, reserve a static external IP address:

```bash
gcloud compute addresses create mcg-ip --global
```

A static IP address is used by the GKE Gateway resource. It lets the IP address remain the same, even if the external load balancer is recreated.

Get the static IP address and store it as an environment variable:

```bash
export MCG_IP=$(gcloud compute addresses describe mcg-ip --global --format "value(address)")
echo ${MCG_IP}
```

To create a stable, human-friendly mapping to your Gateway IP address, you must have a public DNS record.

You can use any DNS provider and automation scheme that you want. This deployment uses Endpoints instead of creating a managed DNS zone. Endpoints provides a free Google-managed DNS record for an external IP address.

Run the following command to create a YAML file named dns-spec.yaml:

```bash
cat <<EOF > ${WORKDIR}/dns-spec.yaml
swagger: "2.0"
info:
  description: "Cloud Endpoints DNS"
  title: "Cloud Endpoints DNS"
  version: "1.0.0"
paths: {}
host: "frontend.endpoints.PROJECT_ID.cloud.goog"
x-google-endpoints:
- name: "frontend.endpoints.PROJECT_ID.cloud.goog"
  target: "${MCG_IP}"
EOF
```

The dns-spec.yaml file defines the public DNS record in the form of frontend.endpoints.PROJECT_ID.cloud.goog, where PROJECT_ID is your unique project identifier.

Deploy the dns-spec.yaml file to create the DNS entry. This process takes a few minutes.

```bash
gcloud endpoints services deploy ${WORKDIR}/dns-spec.yaml
```
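After the deployment finishes, you can optionally confirm that the new DNS name resolves to the reserved IP address. This lookup is an added check, not part of the original steps, and DNS propagation can take a few minutes:

```bash
# The returned A record should match the value of ${MCG_IP}.
dig +short frontend.endpoints.PROJECT_ID.cloud.goog
```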
Create a certificate using Certificate Manager for the DNS entry name you created in the previous step:

```bash
gcloud certificate-manager certificates create mcg-cert \
  --domains="frontend.endpoints.PROJECT_ID.cloud.goog"
```

A Google-managed TLS certificate is used to terminate inbound client requests at the load balancer.

Create a certificate map:

```bash
gcloud certificate-manager maps create mcg-cert-map
```

The load balancer references the certificate through the certificate map entry you create in the next step.

Create a certificate map entry for the certificate you created earlier in this section:

```bash
gcloud certificate-manager maps entries create mcg-cert-map-entry \
  --map="mcg-cert-map" \
  --certificates="mcg-cert" \
  --hostname="frontend.endpoints.PROJECT_ID.cloud.goog"
```
Create backend service policies and load balancer resources
In this section, you accomplish the following tasks:
- Create a Cloud Armor security policy with rules.
- Create a policy that lets the load balancer check the responsiveness of the ingress gateway pods through the ServiceExport YAML file you created earlier.
- Use the GKE Gateway API to create a load balancer resource.
- Use the GatewayClass custom resource to set the specific load balancer type.
- Enable multi-cluster load balancing for the fleet and designate one of the clusters as the configuration cluster for the fleet.
In Cloud Shell, create a Cloud Armor security policy:
```bash
gcloud compute security-policies create edge-fw-policy \
  --description "Block XSS attacks"
```

Create a rule for the security policy:

```bash
gcloud compute security-policies rules create 1000 \
  --security-policy edge-fw-policy \
  --expression "evaluatePreconfiguredExpr('xss-stable')" \
  --action "deny-403" \
  --description "XSS attack filtering"
```

Create a YAML file for the security policy, and reference the ServiceExport YAML file through a corresponding ServiceImport YAML file:

```bash
cat <<EOF > ${WORKDIR}/cloud-armor-backendpolicy.yaml
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: cloud-armor-backendpolicy
  namespace: asm-ingress
spec:
  default:
    securityPolicy: edge-fw-policy
  targetRef:
    group: net.gke.io
    kind: ServiceImport
    name: asm-ingressgateway
EOF
```

Apply the Cloud Armor policy to both clusters:

```bash
kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/cloud-armor-backendpolicy.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/cloud-armor-backendpolicy.yaml
```

Create a custom YAML file that lets the load balancer perform health checks against the Envoy health endpoint (port 15021 on path /healthz/ready) of the ingress gateway pods in both clusters:

```bash
cat <<EOF > ${WORKDIR}/ingress-gateway-healthcheck.yaml
apiVersion: networking.gke.io/v1
kind: HealthCheckPolicy
metadata:
  name: ingress-gateway-healthcheck
  namespace: asm-ingress
spec:
  default:
    config:
      httpHealthCheck:
        port: 15021
        portSpecification: USE_FIXED_PORT
        requestPath: /healthz/ready
      type: HTTP
  targetRef:
    group: net.gke.io
    kind: ServiceImport
    name: asm-ingressgateway
EOF
```

Apply the custom YAML file you created in the previous step to both clusters:

```bash
kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/ingress-gateway-healthcheck.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/ingress-gateway-healthcheck.yaml
```
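If you'd like to confirm that both policy resources were created in the asm-ingress namespace before you continue, the following optional check (an addition to the guide) lists them:

```bash
# Both clusters should list the GCPBackendPolicy and HealthCheckPolicy resources.
kubectl --context=${CLUSTER_1_NAME} -n asm-ingress get gcpbackendpolicy,healthcheckpolicy
kubectl --context=${CLUSTER_2_NAME} -n asm-ingress get gcpbackendpolicy,healthcheckpolicy
```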
CLUSTER_1_NAMEas the configuration cluster:gcloudcontainerfleetingressenable\--config-membership=${CLUSTER_1_NAME}\--location=${CLUSTER_1_REGION}Grant IAM permissions for the Gateway controller in the fleet:
gcloudprojectsadd-iam-policy-bindingPROJECT_ID\--member"serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-multiclusteringress.iam.gserviceaccount.com"\--role"roles/container.admin"Create the load balancer YAML file through a Gateway custom resourcethat references the
gke-l7-global-external-managed-mcgatewayClassand the static IP address you created earlier:cat<<EOF >${WORKDIR}/frontend-gateway.yamlkind:GatewayapiVersion:gateway.networking.k8s.io/v1metadata:name:external-httpnamespace:asm-ingressannotations:networking.gke.io/certmap:mcg-cert-mapspec:gatewayClassName:gke-l7-global-external-managed-mclisteners:-name:http# list the port only so we can redirect any incoming http requests to httpsprotocol:HTTPport:80-name:httpsprotocol:HTTPSport:443allowedRoutes:kinds:-kind:HTTPRouteaddresses:-type:NamedAddressvalue:mcg-ipEOFApply the
frontend-gatewayYAML file to both clusters. OnlyCLUSTER_1_NAMEis authoritative unless you designate a differentconfiguration cluster as authoritative:kubectl--context=${CLUSTER_1_NAME}apply-f${WORKDIR}/frontend-gateway.yamlkubectl--context=${CLUSTER_2_NAME}apply-f${WORKDIR}/frontend-gateway.yamlCreate an
HTTPRouteYAML file calleddefault-httproute.yamlthatinstructs the Gateway resource to send requests to the ingress gateways:cat <<EOF >${WORKDIR}/default-httproute.yamlapiVersion:gateway.networking.k8s.io/v1kind:HTTPRoutemetadata:name:default-httproutenamespace:asm-ingressspec:parentRefs:-name:external-httpnamespace:asm-ingresssectionName:httpsrules:-backendRefs:-group:net.gke.iokind:ServiceImportname:asm-ingressgatewayport:443EOFApply the
HTTPRouteYAML file you created in the previous step toboth clusters:kubectl--context=${CLUSTER_1_NAME}apply-f${WORKDIR}/default-httproute.yamlkubectl--context=${CLUSTER_2_NAME}apply-f${WORKDIR}/default-httproute.yamlTo perform HTTP to HTTP(S) redirects, create an additional
HTTPRouteYAML file calleddefault-httproute-redirect.yaml:cat <<EOF >${WORKDIR}/default-httproute-redirect.yamlkind:HTTPRouteapiVersion:gateway.networking.k8s.io/v1metadata:name:http-to-https-redirect-httproutenamespace:asm-ingressspec:parentRefs:-name:external-httpnamespace:asm-ingresssectionName:httprules:-filters:-type:RequestRedirectrequestRedirect:scheme:httpsstatusCode:301EOFApply the redirect
HTTPRouteYAML file to both clusters:kubectl--context=${CLUSTER_1_NAME}apply-f${WORKDIR}/default-httproute-redirect.yamlkubectl--context=${CLUSTER_2_NAME}apply-f${WORKDIR}/default-httproute-redirect.yamlInspect the Gateway resource to check the progress of the load balancerdeployment:
kubectl--context=${CLUSTER_1_NAME}describegatewayexternal-http-nasm-ingressThe output shows the information you entered in this section.
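To pull just the address assigned to the Gateway rather than reading the full describe output, you can optionally use a jsonpath query. This snippet is an added convenience and assumes the standard Gateway API status fields populated by the GKE Gateway controller:

```bash
# Should print the same address as ${MCG_IP} once the load balancer is programmed.
kubectl --context=${CLUSTER_1_NAME} -n asm-ingress get gateway external-http \
  -o jsonpath='{.status.addresses[0].value}'
```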
Deploy the whereami sample application
This guide uses whereami as a sample application to provide direct feedback about which clusters are replying to requests. The following section sets up two separate deployments of whereami across both clusters: a frontend deployment and a backend deployment.
The frontend deployment is the first workload to receive the request. It then calls the backend deployment.
This model is used to demonstrate a multi-service application architecture. Both frontend and backend services are deployed to both clusters.
In Cloud Shell, create the namespaces for a whereami frontend and a whereami backend across both clusters and enable namespace injection:

```bash
kubectl --context=${CLUSTER_1_NAME} create ns frontend
kubectl --context=${CLUSTER_1_NAME} label namespace frontend istio-injection=enabled
kubectl --context=${CLUSTER_1_NAME} create ns backend
kubectl --context=${CLUSTER_1_NAME} label namespace backend istio-injection=enabled
kubectl --context=${CLUSTER_2_NAME} create ns frontend
kubectl --context=${CLUSTER_2_NAME} label namespace frontend istio-injection=enabled
kubectl --context=${CLUSTER_2_NAME} create ns backend
kubectl --context=${CLUSTER_2_NAME} label namespace backend istio-injection=enabled
```

Create a kustomize variant for the whereami backend:

```bash
mkdir -p ${WORKDIR}/whereami-backend/base

cat <<EOF > ${WORKDIR}/whereami-backend/base/kustomization.yaml
resources:
- github.com/GoogleCloudPlatform/kubernetes-engine-samples/quickstarts/whereami/k8s
EOF

mkdir ${WORKDIR}/whereami-backend/variant

cat <<EOF > ${WORKDIR}/whereami-backend/variant/cm-flag.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: whereami
data:
  BACKEND_ENABLED: "False" # assuming you don't want a chain of backend calls
  METADATA: "backend"
EOF

cat <<EOF > ${WORKDIR}/whereami-backend/variant/service-type.yaml
apiVersion: "v1"
kind: "Service"
metadata:
  name: "whereami"
spec:
  type: ClusterIP
EOF

cat <<EOF > ${WORKDIR}/whereami-backend/variant/kustomization.yaml
nameSuffix: "-backend"
namespace: backend
commonLabels:
  app: whereami-backend
resources:
- ../base
patches:
- path: cm-flag.yaml
  target:
    kind: ConfigMap
- path: service-type.yaml
  target:
    kind: Service
EOF
```

Apply the whereami backend variant to both clusters:

```bash
kubectl --context=${CLUSTER_1_NAME} apply -k ${WORKDIR}/whereami-backend/variant
kubectl --context=${CLUSTER_2_NAME} apply -k ${WORKDIR}/whereami-backend/variant
```

Create a kustomize variant for the whereami frontend:

```bash
mkdir -p ${WORKDIR}/whereami-frontend/base

cat <<EOF > ${WORKDIR}/whereami-frontend/base/kustomization.yaml
resources:
- github.com/GoogleCloudPlatform/kubernetes-engine-samples/quickstarts/whereami/k8s
EOF

mkdir whereami-frontend/variant

cat <<EOF > ${WORKDIR}/whereami-frontend/variant/cm-flag.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: whereami
data:
  BACKEND_ENABLED: "True"
  BACKEND_SERVICE: "http://whereami-backend.backend.svc.cluster.local"
EOF

cat <<EOF > ${WORKDIR}/whereami-frontend/variant/service-type.yaml
apiVersion: "v1"
kind: "Service"
metadata:
  name: "whereami"
spec:
  type: ClusterIP
EOF

cat <<EOF > ${WORKDIR}/whereami-frontend/variant/kustomization.yaml
nameSuffix: "-frontend"
namespace: frontend
commonLabels:
  app: whereami-frontend
resources:
- ../base
patches:
- path: cm-flag.yaml
  target:
    kind: ConfigMap
- path: service-type.yaml
  target:
    kind: Service
EOF
```

Apply the whereami frontend variant to both clusters:

```bash
kubectl --context=${CLUSTER_1_NAME} apply -k ${WORKDIR}/whereami-frontend/variant
kubectl --context=${CLUSTER_2_NAME} apply -k ${WORKDIR}/whereami-frontend/variant
```

Create a VirtualService YAML file to route requests to the whereami frontend:

```bash
cat <<EOF > ${WORKDIR}/frontend-vs.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: whereami-vs
  namespace: frontend
spec:
  gateways:
  - asm-ingress/asm-ingressgateway
  hosts:
  - 'frontend.endpoints.PROJECT_ID.cloud.goog'
  http:
  - route:
    - destination:
        host: whereami-frontend
        port:
          number: 80
EOF
```

Apply the frontend-vs YAML file to both clusters:

```bash
kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/frontend-vs.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/frontend-vs.yaml
```

Now that you have deployed frontend-vs.yaml to both clusters, attempt to call the public endpoint for your clusters:

```bash
curl -s https://frontend.endpoints.PROJECT_ID.cloud.goog | jq
```

The output is similar to the following:

```json
{
  "backend_result": {
    "cluster_name": "edge-to-mesh-02",
    "gce_instance_id": "8396338201253702608",
    "gce_service_account": "e2m-mcg-01.svc.id.goog",
    "host_header": "whereami-backend.backend.svc.cluster.local",
    "metadata": "backend",
    "node_name": "gk3-edge-to-mesh-02-pool-2-675f6abf-645h",
    "pod_ip": "10.124.0.199",
    "pod_name": "whereami-backend-7cbdfd788-8mmnq",
    "pod_name_emoji": "📸",
    "pod_namespace": "backend",
    "pod_service_account": "whereami-backend",
    "project_id": "e2m-mcg-01",
    "timestamp": "2023-12-01T03:46:24",
    "zone": "us-east4-b"
  },
  "cluster_name": "edge-to-mesh-01",
  "gce_instance_id": "1047264075324910451",
  "gce_service_account": "e2m-mcg-01.svc.id.goog",
  "host_header": "frontend.endpoints.e2m-mcg-01.cloud.goog",
  "metadata": "frontend",
  "node_name": "gk3-edge-to-mesh-01-pool-2-d687e3c0-5kf2",
  "pod_ip": "10.54.1.71",
  "pod_name": "whereami-frontend-69c4c867cb-dgg8t",
  "pod_name_emoji": "🪴",
  "pod_namespace": "frontend",
  "pod_service_account": "whereami-frontend",
  "project_id": "e2m-mcg-01",
  "timestamp": "2023-12-01T03:46:24",
  "zone": "us-central1-c"
}
```
Note: If the call fails, add the -v flag to the curl command in this step to verify whether the error is related to TLS errors.

If you run the curl command a few times, you'll see that the responses (both from frontend and backend) come from different regions. The load balancer provides geo-routing in its response. That means the load balancer routes requests from the client to the nearest active cluster, but the requests still land randomly across both clusters. When requests occasionally go from one region to another, latency and cost increase.
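To observe this behavior for yourself, you can optionally run a short loop that extracts just the frontend and backend cluster names from repeated requests. This loop is an added illustration, not part of the original procedure; replace PROJECT_ID with your project ID as before:

```bash
# Repeated calls typically show a mix of edge-to-mesh-01 and edge-to-mesh-02
# before locality load balancing is configured.
for i in $(seq 1 10); do
  curl -s https://frontend.endpoints.PROJECT_ID.cloud.goog | \
    jq -r '"frontend=\(.cluster_name) backend=\(.backend_result.cluster_name)"'
done
```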
In the next section, you implement locality load balancing in the service mesh to keep requests local.
Enable and test locality load balancing for whereami
In this section, you implement locality load balancing in the service mesh to keep requests local. You also perform some tests to see how whereami handles various failure scenarios.
When you make a request to the whereami frontend service, the load balancer sends the request to the cluster with the lowest latency relative to the client. That means the ingress gateway pods within the mesh load balance requests to whereami frontend pods across both clusters. This section addresses that issue by enabling locality load balancing within the mesh.
Note: The DestinationRule examples in the following section set an artificially low value for regional failover. When adapting these samples for your own purposes, test extensively to verify that you've found the appropriate value for your needs.

In Cloud Shell, create a DestinationRule YAML file that enables locality load balancing and regional failover for the frontend service:

```bash
cat <<EOF > ${WORKDIR}/frontend-dr.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: frontend
  namespace: frontend
spec:
  host: whereami-frontend.frontend.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        maxRequestsPerConnection: 0
    loadBalancer:
      simple: LEAST_REQUEST
      localityLbSetting:
        enabled: true
    outlierDetection:
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 1m
EOF
```

The preceding code sample only enables local routing for the frontend service. You also need an additional configuration that handles the backend.

Apply the frontend-dr YAML file to both clusters:

```bash
kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/frontend-dr.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/frontend-dr.yaml
```

Create a DestinationRule YAML file that enables locality load balancing and regional failover for the backend service:

```bash
cat <<EOF > ${WORKDIR}/backend-dr.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: backend
  namespace: backend
spec:
  host: whereami-backend.backend.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        maxRequestsPerConnection: 0
    loadBalancer:
      simple: LEAST_REQUEST
      localityLbSetting:
        enabled: true
    outlierDetection:
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 1m
EOF
```

Apply the backend-dr YAML file to both clusters:

```bash
kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/backend-dr.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/backend-dr.yaml
```

With both sets of DestinationRule YAML files applied to both clusters, requests remain local to the cluster that the request is routed to.

To test failover for the frontend service, reduce the number of replicas for the ingress gateway in your primary cluster. From the perspective of the multi-regional load balancer, this action simulates a cluster failure. It causes that cluster to fail its load balancer health checks. This example uses the cluster in CLUSTER_1_REGION. You should only see responses from the cluster in CLUSTER_2_REGION.

Reduce the number of replicas for the ingress gateway in your primary cluster to zero and call the public endpoint to verify that requests have failed over to the other cluster:
```bash
kubectl --context=${CLUSTER_1_NAME} -n asm-ingress scale --replicas=0 deployment/asm-ingressgateway
```

Note: While there is a HorizontalPodAutoscaler configured with minReplicas: 3 for the asm-ingressgateway deployment, scaling is temporarily deactivated when the deployment's replica count is set to 0. For more information, see Horizontal Pod Autoscaling - Implicit maintenance-mode deactivation.

The output should resemble the following:

```
$ curl -s https://frontend.endpoints.PROJECT_ID.cloud.goog | jq
{
  "backend_result": {
    "cluster_name": "edge-to-mesh-02",
    "gce_instance_id": "2717459599837162415",
    "gce_service_account": "e2m-mcg-01.svc.id.goog",
    "host_header": "whereami-backend.backend.svc.cluster.local",
    "metadata": "backend",
    "node_name": "gk3-edge-to-mesh-02-pool-2-675f6abf-dxs2",
    "pod_ip": "10.124.1.7",
    "pod_name": "whereami-backend-7cbdfd788-mp8zv",
    "pod_name_emoji": "🏌🏽‍♀",
    "pod_namespace": "backend",
    "pod_service_account": "whereami-backend",
    "project_id": "e2m-mcg-01",
    "timestamp": "2023-12-01T05:41:18",
    "zone": "us-east4-b"
  },
  "cluster_name": "edge-to-mesh-02",
  "gce_instance_id": "6983018919754001204",
  "gce_service_account": "e2m-mcg-01.svc.id.goog",
  "host_header": "frontend.endpoints.e2m-mcg-01.cloud.goog",
  "metadata": "frontend",
  "node_name": "gk3-edge-to-mesh-02-pool-3-d42ddfbf-qmkn",
  "pod_ip": "10.124.1.142",
  "pod_name": "whereami-frontend-69c4c867cb-xf8db",
  "pod_name_emoji": "🏴",
  "pod_namespace": "frontend",
  "pod_service_account": "whereami-frontend",
  "project_id": "e2m-mcg-01",
  "timestamp": "2023-12-01T05:41:18",
  "zone": "us-east4-b"
}
```

To resume typical traffic routing, restore the ingress gateway replicas to the original value in the cluster:

```bash
kubectl --context=${CLUSTER_1_NAME} -n asm-ingress scale --replicas=3 deployment/asm-ingressgateway
```
Simulate a failure for the backend service by reducing the number of replicas in the primary region to 0:

```bash
kubectl --context=${CLUSTER_1_NAME} -n backend scale --replicas=0 deployment/whereami-backend
```

Call the public endpoint again and verify that the responses from the frontend service come from the us-central1 primary region through the load balancer, and that the responses from the backend service come from the us-east4 secondary region.
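One way to check this is to call the public endpoint and filter just the fields of interest with jq. This command is an added suggestion; the field names come from the example output earlier in this guide:

```bash
# Expect the frontend fields to show us-central1 and the backend fields to show us-east4.
curl -s https://frontend.endpoints.PROJECT_ID.cloud.goog | \
  jq '{frontend_cluster: .cluster_name, frontend_zone: .zone,
       backend_cluster: .backend_result.cluster_name, backend_zone: .backend_result.zone}'
```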
The output should include a response for the frontend service from the primary region (us-central1), and a response for the backend service from the secondary region (us-east4), as expected.

Restore the backend service replicas to the original value to resume typical traffic routing:

```bash
kubectl --context=${CLUSTER_1_NAME} -n backend scale --replicas=3 deployment/whereami-backend
```
You now have a global HTTP(S) load balancer serving as a frontend to your service-mesh-hosted, multi-region application.
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this deployment, either delete the project that contains the resources, or keep the project and delete the individual resources.
Delete the project
Delete the individual resources
If you want to keep the Google Cloud project you used in this deployment, delete the individual resources:
In Cloud Shell, delete the HTTPRoute resources:

```bash
kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/default-httproute-redirect.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/default-httproute-redirect.yaml
kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/default-httproute.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/default-httproute.yaml
```

Delete the GKE Gateway resources:

```bash
kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/frontend-gateway.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/frontend-gateway.yaml
```

Delete the policies:

```bash
kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/ingress-gateway-healthcheck.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/ingress-gateway-healthcheck.yaml
kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/cloud-armor-backendpolicy.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/cloud-armor-backendpolicy.yaml
```

Delete the service exports:

```bash
kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/svc_export.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/svc_export.yaml
```

Delete the Cloud Armor resources:

```bash
gcloud --project=PROJECT_ID compute security-policies rules delete 1000 --security-policy edge-fw-policy --quiet
gcloud --project=PROJECT_ID compute security-policies delete edge-fw-policy --quiet
```

Delete the Certificate Manager resources:

```bash
gcloud --project=PROJECT_ID certificate-manager maps entries delete mcg-cert-map-entry --map="mcg-cert-map" --quiet
gcloud --project=PROJECT_ID certificate-manager maps delete mcg-cert-map --quiet
gcloud --project=PROJECT_ID certificate-manager certificates delete mcg-cert --quiet
```

Delete the Endpoints DNS entry:

```bash
gcloud --project=PROJECT_ID endpoints services delete "frontend.endpoints.PROJECT_ID.cloud.goog" --quiet
```

Delete the static IP address:

```bash
gcloud --project=PROJECT_ID compute addresses delete mcg-ip --global --quiet
```

Delete the GKE Autopilot clusters. This step takes several minutes.

```bash
gcloud --project=PROJECT_ID container clusters delete ${CLUSTER_1_NAME} --region ${CLUSTER_1_REGION} --quiet
gcloud --project=PROJECT_ID container clusters delete ${CLUSTER_2_NAME} --region ${CLUSTER_2_REGION} --quiet
```
What's next
- Learn about more features offered by GKE Gateway that you can use with your service mesh.
- Learn about the different types of Cloud Load Balancing available for GKE.
- Learn about the features and functions offered by Cloud Service Mesh.
- For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.
Contributors
Authors:
- Alex Mattson | Application Specialist Engineer
- Mark Chilvers | Application Specialist Engineer
Other contributors:
- Abdelfettah Sghiouar | Cloud Developer Advocate
- Greg Bray | Customer Engineer
- Paul Revello | Cloud Solutions Architect
- Valavan Rajakumar | Key Enterprise Architect