From edge to multi-cluster mesh: Deploy globally distributed applications through GKE Gateway and Cloud Service Mesh

This document shows you how to accomplish the tasks that are listed in the Objectives section of this guide.

This deployment guide is intended for platform administrators. It's also intended for advanced practitioners who run Cloud Service Mesh. The instructions also work for Istio on GKE.

Architecture

The following diagram shows the default ingress topology of a service mesh—an external TCP/UDP load balancer that exposes the ingress gateway proxies on a single cluster:

An external load balancer routes external clients to the mesh through ingress gateway proxies.

This deployment guide uses Google Kubernetes Engine (GKE) Gateway resources. Specifically, it uses a multi-cluster gateway to configure multi-region load balancing in front of multiple Autopilot clusters that are distributed across two regions.

TLS encryption from the client, at the load balancer, and within the mesh.

The preceding diagram shows how data flows through cloud ingress and mesh ingress scenarios. For more information, see the explanation of the architecture diagram in the associated reference architecture document.

Objectives

  • Deploy a pair of GKE Autopilot clusters on Google Cloud to the same fleet.
  • Deploy an Istio-based Cloud Service Mesh to the same fleet.
  • Configure a load balancer using GKE Gateway to terminate public HTTPS traffic.
  • Direct public HTTPS traffic to applications hosted by Cloud Service Mesh that are deployed across multiple clusters and regions.
  • Deploy the whereami sample application to both Autopilot clusters.

Costs

In this document, you use the following billable components of Google Cloud:

To generate a cost estimate based on your projected usage, use the pricing calculator.

New Google Cloud users might be eligible for a free trial.

When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.

Before you begin

  1. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Roles required to select or create a project

    • Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
    • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
    Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

    Go to project selector

  2. Verify that billing is enabled for your Google Cloud project.

  3. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    You run all of the terminal commands for this deployment from Cloud Shell.

  4. Set your default Google Cloud project:

export PROJECT=PROJECT_ID
export PROJECT_NUMBER=$(gcloud projects describe PROJECT_ID --format="value(projectNumber)")
gcloud config set project PROJECT_ID

    Replace PROJECT_ID with the ID of the project that you want to use for this deployment.

  5. Create a working directory:

mkdir -p ${HOME}/edge-to-mesh-multi-region
cd ${HOME}/edge-to-mesh-multi-region
export WORKDIR=`pwd`

Create GKE clusters

In this section, you create GKE clusters to host the applications andsupporting infrastructure, which you create later in this deployment guide.

  1. In Cloud Shell, create a new kubeconfig file. This step ensures that you don't create a conflict with your existing (default) kubeconfig file.

touch edge2mesh_mr_kubeconfig
export KUBECONFIG=${WORKDIR}/edge2mesh_mr_kubeconfig
  2. Define the environment variables that are used when creating the GKE clusters and the resources within them. Modify the default region choices to suit your purposes.

export CLUSTER_1_NAME=edge-to-mesh-01
export CLUSTER_2_NAME=edge-to-mesh-02
export CLUSTER_1_REGION=us-central1
export CLUSTER_2_REGION=us-east4
export PUBLIC_ENDPOINT=frontend.endpoints.PROJECT_ID.cloud.goog
  3. Enable the Google Cloud APIs that are used throughout this guide:

gcloud services enable \
  container.googleapis.com \
  mesh.googleapis.com \
  gkehub.googleapis.com \
  multiclusterservicediscovery.googleapis.com \
  multiclusteringress.googleapis.com \
  trafficdirector.googleapis.com \
  certificatemanager.googleapis.com
  4. Create a GKE Autopilot cluster with private nodes in CLUSTER_1_REGION. Use the --async flag to avoid waiting for the first cluster to provision and register to the fleet:

gcloud container clusters create-auto --async \
  ${CLUSTER_1_NAME} --region ${CLUSTER_1_REGION} \
  --release-channel rapid --labels mesh_id=proj-${PROJECT_NUMBER} \
  --enable-private-nodes --enable-fleet
  5. Create and register a second Autopilot cluster in CLUSTER_2_REGION:

gcloud container clusters create-auto \
  ${CLUSTER_2_NAME} --region ${CLUSTER_2_REGION} \
  --release-channel rapid --labels mesh_id=proj-${PROJECT_NUMBER} \
  --enable-private-nodes --enable-fleet
  6. Ensure that the clusters are running. It might take up to 20 minutes until all clusters are running:

gcloud container clusters list

    The output is similar to the following:

NAME             LOCATION     MASTER_VERSION  MASTER_IP       MACHINE_TYPE  NODE_VERSION    NUM_NODES  STATUS
edge-to-mesh-01  us-central1  1.27.5-gke.200  34.27.171.241   e2-small      1.27.5-gke.200             RUNNING
edge-to-mesh-02  us-east4     1.27.5-gke.200  35.236.204.156  e2-small      1.27.5-gke.200             RUNNING
  7. Gather the credentials for CLUSTER_1_NAME. You created CLUSTER_1_NAME asynchronously so you could run additional commands while the cluster provisioned:

gcloud container clusters get-credentials ${CLUSTER_1_NAME} \
  --region ${CLUSTER_1_REGION}
  8. To clarify the names of the Kubernetes contexts, rename them to the names of the clusters:

kubectl config rename-context gke_PROJECT_ID_${CLUSTER_1_REGION}_${CLUSTER_1_NAME} ${CLUSTER_1_NAME}
kubectl config rename-context gke_PROJECT_ID_${CLUSTER_2_REGION}_${CLUSTER_2_NAME} ${CLUSTER_2_NAME}
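
Before you move on, you can optionally confirm that both clusters registered to the same fleet by listing the fleet memberships. This check isn't part of the original procedure, but it can help you catch registration problems early:

gcloud container fleet memberships list

The output should include one membership for each of the two clusters that you created.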

Install a service mesh

In this section, you configure the managed Cloud Service Mesh with the fleet API. Using the fleet API to enable Cloud Service Mesh provides a declarative approach to provision a service mesh.

  1. In Cloud Shell, enable Cloud Service Mesh on the fleet:

gcloud container fleet mesh enable
  2. Enable automatic control plane and data plane management:

gcloud container fleet mesh update \
  --management automatic \
  --memberships ${CLUSTER_1_NAME},${CLUSTER_2_NAME}
  3. Wait about 20 minutes. Then verify that the control plane status is ACTIVE:

gcloud container fleet mesh describe

    The output is similar to the following:

createTime: '2023-11-30T19:23:21.713028916Z'
membershipSpecs:
  projects/603904278888/locations/us-central1/memberships/edge-to-mesh-01:
    mesh:
      management: MANAGEMENT_AUTOMATIC
  projects/603904278888/locations/us-east4/memberships/edge-to-mesh-02:
    mesh:
      management: MANAGEMENT_AUTOMATIC
membershipStates:
  projects/603904278888/locations/us-central1/memberships/edge-to-mesh-01:
    servicemesh:
      controlPlaneManagement:
        details:
        - code: REVISION_READY
          details: 'Ready: asm-managed-rapid'
        implementation: ISTIOD
        state: ACTIVE
      dataPlaneManagement:
        details:
        - code: OK
          details: Service is running.
        state: ACTIVE
    state:
      code: OK
      description: |-
        Revision ready for use: asm-managed-rapid.
        All Canonical Services have been reconciled successfully.
      updateTime: '2024-06-27T09:00:21.333579005Z'
  projects/603904278888/locations/us-east4/memberships/edge-to-mesh-02:
    servicemesh:
      controlPlaneManagement:
        details:
        - code: REVISION_READY
          details: 'Ready: asm-managed-rapid'
        implementation: ISTIOD
        state: ACTIVE
      dataPlaneManagement:
        details:
        - code: OK
          details: Service is running.
        state: ACTIVE
    state:
      code: OK
      description: |-
        Revision ready for use: asm-managed-rapid.
        All Canonical Services have been reconciled successfully.
      updateTime: '2024-06-27T09:00:24.674852751Z'
name: projects/e2m-private-test-01/locations/global/features/servicemesh
resourceState:
  state: ACTIVE
spec: {}
updateTime: '2024-06-04T17:16:28.730429993Z'

Deploy an external Application Load Balancer and create ingress gateways

In this section, you deploy an external Application Load Balancer through the GKE Gateway controller and create ingress gateways for both clusters. The gateway and gatewayClass resources automate the provisioning of the load balancer and backend health checking. To provide TLS termination on the load balancer, you create Certificate Manager resources and attach them to the load balancer. Additionally, you use Endpoints to automatically provision a public DNS name for the application.

Install an ingress gateway on both clusters

As a security best practice, we recommend that you deploy the ingress gateway in a different namespace from the mesh control plane.

  1. In Cloud Shell, create a dedicated asm-ingress namespace on each cluster:

kubectl --context=${CLUSTER_1_NAME} create namespace asm-ingress
kubectl --context=${CLUSTER_2_NAME} create namespace asm-ingress
  2. Add a namespace label to the asm-ingress namespaces:

kubectl --context=${CLUSTER_1_NAME} label namespace asm-ingress istio-injection=enabled
kubectl --context=${CLUSTER_2_NAME} label namespace asm-ingress istio-injection=enabled

    The output is similar to the following:

    namespace/asm-ingress labeled

    Labeling the asm-ingress namespaces with istio-injection=enabled instructs Cloud Service Mesh to automatically inject Envoy sidecar proxies when a pod is deployed.

  3. Generate a self-signed certificate for future use:

openssl req -new -newkey rsa:4096 -days 365 -nodes -x509 \
  -subj "/CN=frontend.endpoints.PROJECT_ID.cloud.goog/O=Edge2Mesh Inc" \
  -keyout ${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.key \
  -out ${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.crt

    The certificate provides an additional layer of encryption between the load balancer and the service mesh ingress gateways. It also enables support for HTTP/2-based protocols like gRPC. Instructions about how to attach the self-signed certificate to the ingress gateways are provided later in Create external IP address, DNS record, and TLS certificate resources.

    For more information about the requirements of the ingress gateway certificate, see Encryption from the load balancer to the backends.

  4. Create a Kubernetes secret on each cluster to store the self-signed certificate:

kubectl --context ${CLUSTER_1_NAME} -n asm-ingress create secret tls \
  edge2mesh-credential \
  --key=${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.key \
  --cert=${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.crt
kubectl --context ${CLUSTER_2_NAME} -n asm-ingress create secret tls \
  edge2mesh-credential \
  --key=${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.key \
  --cert=${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.crt
  5. To integrate with the external Application Load Balancer, create a kustomize variant to configure the ingress gateway resources:

mkdir -p ${WORKDIR}/asm-ig/base

cat <<EOF > ${WORKDIR}/asm-ig/base/kustomization.yaml
resources:
- github.com/GoogleCloudPlatform/anthos-service-mesh-samples/docs/ingress-gateway-asm-manifests/base
EOF

mkdir ${WORKDIR}/asm-ig/variant

cat <<EOF > ${WORKDIR}/asm-ig/variant/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
EOF

cat <<EOF > ${WORKDIR}/asm-ig/variant/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: asm-ingressgateway
subjects:
- kind: ServiceAccount
  name: asm-ingressgateway
EOF

cat <<EOF > ${WORKDIR}/asm-ig/variant/service-proto-type.yaml
apiVersion: v1
kind: Service
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
spec:
  ports:
  - name: status-port
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http
    port: 80
    targetPort: 8080
    appProtocol: HTTP
  - name: https
    port: 443
    targetPort: 8443
    appProtocol: HTTP2
  type: ClusterIP
EOF

cat <<EOF > ${WORKDIR}/asm-ig/variant/gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
spec:
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*" # IMPORTANT: Must use wildcard here when using SSL, as SNI isn't passed from GFE
    tls:
      mode: SIMPLE
      credentialName: edge2mesh-credential
EOF

cat <<EOF > ${WORKDIR}/asm-ig/variant/kustomization.yaml
namespace: asm-ingress
resources:
- ../base
- role.yaml
- rolebinding.yaml
patches:
- path: service-proto-type.yaml
  target:
    kind: Service
- path: gateway.yaml
  target:
    kind: Gateway
EOF
  6. Apply the ingress gateway configuration to both clusters:

kubectl --context ${CLUSTER_1_NAME} apply -k ${WORKDIR}/asm-ig/variant
kubectl --context ${CLUSTER_2_NAME} apply -k ${WORKDIR}/asm-ig/variant
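
Optionally, you can verify that the ingress gateway is up before you continue. The following check assumes the deployment and service name asm-ingressgateway that the base manifests in the preceding kustomization use:

kubectl --context ${CLUSTER_1_NAME} -n asm-ingress get pods,svc
kubectl --context ${CLUSTER_2_NAME} -n asm-ingress get pods,svc

In each cluster, the asm-ingressgateway pods should be in the Running state, and the ClusterIP service should expose ports 15021, 80, and 443.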

Expose ingress gateway pods to the load balancer using a multi-cluster service

In this section, you export the ingress gateway pods through a ServiceExport custom resource. Exporting the service registers it with multi-cluster Services (MCS), which creates a corresponding ServiceImport that the multi-cluster gateway uses to reach the ingress gateway pods in both clusters as load balancer backends.

  1. In Cloud Shell, enable multi-cluster Services (MCS) for the fleet:

gcloud container fleet multi-cluster-services enable
  2. Grant MCS the required IAM permissions to the project or fleet:

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[gke-mcs/gke-mcs-importer]" \
  --role "roles/compute.networkViewer"
  3. Create the ServiceExport YAML file:

cat <<EOF > ${WORKDIR}/svc_export.yaml
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
EOF
  4. Apply the ServiceExport YAML file to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/svc_export.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/svc_export.yaml

    If you receive the following error message, wait a few moments for the MCS custom resource definitions (CRDs) to install. Then re-run the commands to apply the ServiceExport YAML file to both clusters.

error: resource mapping not found for name: "asm-ingressgateway" namespace: "asm-ingress" from "svc_export.yaml": no matches for kind "ServiceExport" in version "net.gke.io/v1"
ensure CRDs are installed first
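
Optionally, after the ServiceExport resources are accepted, you can confirm that MCS derived a matching ServiceImport in each cluster. The ServiceImport can take a few minutes to appear; this check isn't part of the original procedure:

kubectl --context=${CLUSTER_1_NAME} -n asm-ingress get serviceexports,serviceimports
kubectl --context=${CLUSTER_2_NAME} -n asm-ingress get serviceexports,serviceimports

The ServiceImport named asm-ingressgateway is what the load balancer resources in the following sections reference as a backend.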

Create external IP address, DNS record, and TLS certificate resources

In this section, you create networking resources that support theload-balancing resources that you create later in this deployment.

  1. In Cloud Shell, reserve a static external IP address:

gcloud compute addresses create mcg-ip --global

    A static IP address is used by the GKE Gateway resource. It lets the IP address remain the same, even if the external load balancer is recreated.

  2. Get the static IP address and store it as an environment variable:

export MCG_IP=$(gcloud compute addresses describe mcg-ip --global --format "value(address)")
echo ${MCG_IP}

    To create a stable, human-friendly mapping to your Gateway IP address, you must have a public DNS record.

    You can use any DNS provider and automation scheme that you want. This deployment uses Endpoints instead of creating a managed DNS zone. Endpoints provides a free Google-managed DNS record for an external IP address.

  3. Run the following command to create a YAML file named dns-spec.yaml:

cat <<EOF > ${WORKDIR}/dns-spec.yaml
swagger: "2.0"
info:
  description: "Cloud Endpoints DNS"
  title: "Cloud Endpoints DNS"
  version: "1.0.0"
paths: {}
host: "frontend.endpoints.PROJECT_ID.cloud.goog"
x-google-endpoints:
- name: "frontend.endpoints.PROJECT_ID.cloud.goog"
  target: "${MCG_IP}"
EOF

    The dns-spec.yaml file defines the public DNS record in the form of frontend.endpoints.PROJECT_ID.cloud.goog, where PROJECT_ID is your unique project identifier.

  4. Deploy the dns-spec.yaml file to create the DNS entry. This process takes a few minutes.

gcloud endpoints services deploy ${WORKDIR}/dns-spec.yaml
  5. Create a certificate using Certificate Manager for the DNS entry name that you created in the previous step:

gcloud certificate-manager certificates create mcg-cert \
  --domains="frontend.endpoints.PROJECT_ID.cloud.goog"

    A Google-managed TLS certificate is used to terminate inbound client requests at the load balancer.

  6. Create a certificate map:

gcloud certificate-manager maps create mcg-cert-map

    The load balancer references the certificate through the certificate map entry that you create in the next step.

  7. Create a certificate map entry for the certificate that you created earlier in this section:

gcloud certificate-manager maps entries create mcg-cert-map-entry \
  --map="mcg-cert-map" \
  --certificates="mcg-cert" \
  --hostname="frontend.endpoints.PROJECT_ID.cloud.goog"
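
Certificate Manager provisions the Google-managed certificate asynchronously. If you want to check on its progress, you can describe the certificate and look at the managed provisioning state. A PROVISIONING state at this point is expected, because the certificate typically becomes ACTIVE only after the load balancer that you create in the next section starts serving the mapped hostname:

gcloud certificate-manager certificates describe mcg-cert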

Create backend service policies and load balancer resources

In this section, you accomplish the following tasks:

  • Create a Cloud Armor security policy with rules.
  • Create a policy that lets the load balancer check the responsiveness of the ingress gateway pods through the ServiceExport YAML file you created earlier.
  • Use the GKE Gateway API to create a load balancer resource.
  • Use the GatewayClass custom resource to set the specific load balancer type.
  • Enable multi-cluster load balancing for the fleet and designate one of the clusters as the configuration cluster for the fleet.
  1. In Cloud Shell, create a Cloud Armor security policy:

gcloud compute security-policies create edge-fw-policy \
  --description "Block XSS attacks"
  2. Create a rule for the security policy:

gcloud compute security-policies rules create 1000 \
  --security-policy edge-fw-policy \
  --expression "evaluatePreconfiguredExpr('xss-stable')" \
  --action "deny-403" \
  --description "XSS attack filtering"
  3. Create a YAML file for the security policy, and reference the ServiceExport YAML file through a corresponding ServiceImport YAML file:

cat <<EOF > ${WORKDIR}/cloud-armor-backendpolicy.yaml
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: cloud-armor-backendpolicy
  namespace: asm-ingress
spec:
  default:
    securityPolicy: edge-fw-policy
  targetRef:
    group: net.gke.io
    kind: ServiceImport
    name: asm-ingressgateway
EOF
  4. Apply the Cloud Armor policy to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/cloud-armor-backendpolicy.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/cloud-armor-backendpolicy.yaml
  5. Create a custom YAML file that lets the load balancer perform health checks against the Envoy health endpoint (port 15021 on path /healthz/ready) of the ingress gateway pods in both clusters:

cat <<EOF > ${WORKDIR}/ingress-gateway-healthcheck.yaml
apiVersion: networking.gke.io/v1
kind: HealthCheckPolicy
metadata:
  name: ingress-gateway-healthcheck
  namespace: asm-ingress
spec:
  default:
    config:
      httpHealthCheck:
        port: 15021
        portSpecification: USE_FIXED_PORT
        requestPath: /healthz/ready
      type: HTTP
  targetRef:
    group: net.gke.io
    kind: ServiceImport
    name: asm-ingressgateway
EOF
  6. Apply the custom YAML file you created in the previous step to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/ingress-gateway-healthcheck.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/ingress-gateway-healthcheck.yaml
  7. Enable multi-cluster load balancing for the fleet, and designate CLUSTER_1_NAME as the configuration cluster:

gcloud container fleet ingress enable \
  --config-membership=${CLUSTER_1_NAME} \
  --location=${CLUSTER_1_REGION}
  8. Grant IAM permissions for the Gateway controller in the fleet:

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member "serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-multiclusteringress.iam.gserviceaccount.com" \
  --role "roles/container.admin"
  9. Create the load balancer YAML file through a Gateway custom resource that references the gke-l7-global-external-managed-mc gatewayClass and the static IP address you created earlier:

cat <<EOF > ${WORKDIR}/frontend-gateway.yaml
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: external-http
  namespace: asm-ingress
  annotations:
    networking.gke.io/certmap: mcg-cert-map
spec:
  gatewayClassName: gke-l7-global-external-managed-mc
  listeners:
  - name: http # list the port only so we can redirect any incoming http requests to https
    protocol: HTTP
    port: 80
  - name: https
    protocol: HTTPS
    port: 443
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
  addresses:
  - type: NamedAddress
    value: mcg-ip
EOF
  10. Apply the frontend-gateway YAML file to both clusters. Only CLUSTER_1_NAME is authoritative unless you designate a different configuration cluster as authoritative:

kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/frontend-gateway.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/frontend-gateway.yaml
  11. Create an HTTPRoute YAML file called default-httproute.yaml that instructs the Gateway resource to send requests to the ingress gateways:

cat <<EOF > ${WORKDIR}/default-httproute.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: default-httproute
  namespace: asm-ingress
spec:
  parentRefs:
  - name: external-http
    namespace: asm-ingress
    sectionName: https
  rules:
  - backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: asm-ingressgateway
      port: 443
EOF
  12. Apply the HTTPRoute YAML file you created in the previous step to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/default-httproute.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/default-httproute.yaml
  13. To perform HTTP to HTTPS redirects, create an additional HTTPRoute YAML file called default-httproute-redirect.yaml:

cat <<EOF > ${WORKDIR}/default-httproute-redirect.yaml
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: http-to-https-redirect-httproute
  namespace: asm-ingress
spec:
  parentRefs:
  - name: external-http
    namespace: asm-ingress
    sectionName: http
  rules:
  - filters:
    - type: RequestRedirect
      requestRedirect:
        scheme: https
        statusCode: 301
EOF
  14. Apply the redirect HTTPRoute YAML file to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/default-httproute-redirect.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/default-httproute-redirect.yaml
  15. Inspect the Gateway resource to check the progress of the load balancer deployment:

kubectl --context=${CLUSTER_1_NAME} describe gateway external-http -n asm-ingress

    The output shows the information you entered in this section.
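
If you want to confirm that the Gateway was programmed with the reserved static IP address, you can optionally read the address from the Gateway status and compare it to the mcg-ip value. The fully qualified resource name is used here to avoid ambiguity with the Istio Gateway kind of the same name; this check isn't part of the original procedure:

kubectl --context=${CLUSTER_1_NAME} -n asm-ingress get gateways.gateway.networking.k8s.io external-http \
  -o jsonpath='{.status.addresses[0].value}{"\n"}'
echo ${MCG_IP}

Both commands should print the same IP address after the load balancer finishes provisioning.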

Deploy the whereami sample application

This guide uses whereami as a sample application to provide direct feedback about which clusters are replying to requests. The following section sets up two separate deployments of whereami across both clusters: a frontend deployment and a backend deployment.

The frontend deployment is the first workload to receive the request. It then calls the backend deployment.

This model is used to demonstrate a multi-service application architecture. Both frontend and backend services are deployed to both clusters.

  1. In Cloud Shell, create the namespaces for a whereami frontend and a whereami backend across both clusters, and enable namespace injection:

kubectl --context=${CLUSTER_1_NAME} create ns frontend
kubectl --context=${CLUSTER_1_NAME} label namespace frontend istio-injection=enabled
kubectl --context=${CLUSTER_1_NAME} create ns backend
kubectl --context=${CLUSTER_1_NAME} label namespace backend istio-injection=enabled
kubectl --context=${CLUSTER_2_NAME} create ns frontend
kubectl --context=${CLUSTER_2_NAME} label namespace frontend istio-injection=enabled
kubectl --context=${CLUSTER_2_NAME} create ns backend
kubectl --context=${CLUSTER_2_NAME} label namespace backend istio-injection=enabled
  2. Create a kustomize variant for the whereami backend:

mkdir -p ${WORKDIR}/whereami-backend/base

cat <<EOF > ${WORKDIR}/whereami-backend/base/kustomization.yaml
resources:
- github.com/GoogleCloudPlatform/kubernetes-engine-samples/quickstarts/whereami/k8s
EOF

mkdir ${WORKDIR}/whereami-backend/variant

cat <<EOF > ${WORKDIR}/whereami-backend/variant/cm-flag.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: whereami
data:
  BACKEND_ENABLED: "False" # assuming you don't want a chain of backend calls
  METADATA: "backend"
EOF

cat <<EOF > ${WORKDIR}/whereami-backend/variant/service-type.yaml
apiVersion: "v1"
kind: "Service"
metadata:
  name: "whereami"
spec:
  type: ClusterIP
EOF

cat <<EOF > ${WORKDIR}/whereami-backend/variant/kustomization.yaml
nameSuffix: "-backend"
namespace: backend
commonLabels:
  app: whereami-backend
resources:
- ../base
patches:
- path: cm-flag.yaml
  target:
    kind: ConfigMap
- path: service-type.yaml
  target:
    kind: Service
EOF
  3. Apply the whereami backend variant to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -k ${WORKDIR}/whereami-backend/variant
kubectl --context=${CLUSTER_2_NAME} apply -k ${WORKDIR}/whereami-backend/variant
  4. Create a kustomize variant for the whereami frontend:

mkdir -p ${WORKDIR}/whereami-frontend/base

cat <<EOF > ${WORKDIR}/whereami-frontend/base/kustomization.yaml
resources:
- github.com/GoogleCloudPlatform/kubernetes-engine-samples/quickstarts/whereami/k8s
EOF

mkdir whereami-frontend/variant

cat <<EOF > ${WORKDIR}/whereami-frontend/variant/cm-flag.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: whereami
data:
  BACKEND_ENABLED: "True"
  BACKEND_SERVICE: "http://whereami-backend.backend.svc.cluster.local"
EOF

cat <<EOF > ${WORKDIR}/whereami-frontend/variant/service-type.yaml
apiVersion: "v1"
kind: "Service"
metadata:
  name: "whereami"
spec:
  type: ClusterIP
EOF

cat <<EOF > ${WORKDIR}/whereami-frontend/variant/kustomization.yaml
nameSuffix: "-frontend"
namespace: frontend
commonLabels:
  app: whereami-frontend
resources:
- ../base
patches:
- path: cm-flag.yaml
  target:
    kind: ConfigMap
- path: service-type.yaml
  target:
    kind: Service
EOF
  5. Apply the whereami frontend variant to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -k ${WORKDIR}/whereami-frontend/variant
kubectl --context=${CLUSTER_2_NAME} apply -k ${WORKDIR}/whereami-frontend/variant
  6. Create a VirtualService YAML file to route requests to the whereami frontend:

cat <<EOF > ${WORKDIR}/frontend-vs.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: whereami-vs
  namespace: frontend
spec:
  gateways:
  - asm-ingress/asm-ingressgateway
  hosts:
  - 'frontend.endpoints.PROJECT_ID.cloud.goog'
  http:
  - route:
    - destination:
        host: whereami-frontend
        port:
          number: 80
EOF
  7. Apply the frontend-vs YAML file to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/frontend-vs.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/frontend-vs.yaml
  8. Now that you have deployed frontend-vs.yaml to both clusters, attempt to call the public endpoint for your clusters:

curl -s https://frontend.endpoints.PROJECT_ID.cloud.goog | jq

    The output is similar to the following:

    {  "backend_result": {    "cluster_name": "edge-to-mesh-02",    "gce_instance_id": "8396338201253702608",    "gce_service_account": "e2m-mcg-01.svc.id.goog",    "host_header": "whereami-backend.backend.svc.cluster.local",    "metadata": "backend",    "node_name": "gk3-edge-to-mesh-02-pool-2-675f6abf-645h",    "pod_ip": "10.124.0.199",    "pod_name": "whereami-backend-7cbdfd788-8mmnq",    "pod_name_emoji": "📸",    "pod_namespace": "backend",    "pod_service_account": "whereami-backend",    "project_id": "e2m-mcg-01",    "timestamp": "2023-12-01T03:46:24",    "zone": "us-east4-b"  },  "cluster_name": "edge-to-mesh-01",  "gce_instance_id": "1047264075324910451",  "gce_service_account": "e2m-mcg-01.svc.id.goog",  "host_header": "frontend.endpoints.e2m-mcg-01.cloud.goog",  "metadata": "frontend",  "node_name": "gk3-edge-to-mesh-01-pool-2-d687e3c0-5kf2",  "pod_ip": "10.54.1.71",  "pod_name": "whereami-frontend-69c4c867cb-dgg8t",  "pod_name_emoji": "🪴",  "pod_namespace": "frontend",  "pod_service_account": "whereami-frontend",  "project_id": "e2m-mcg-01",  "timestamp": "2023-12-01T03:46:24",  "zone": "us-central1-c"}
Note: It takes around 20 minutes for the certificate to be provisioned. If you don't receive a response, use the -v flag in the curl command in this step to check whether the error is related to TLS.

If you run the curl command a few times, you'll see that the responses (both from frontend and backend) come from different regions. The load balancer provides geo-routing: it routes requests from the client to the nearest active cluster, but within the mesh, requests can still land on either cluster. When requests occasionally cross from one region to another, they increase latency and cost.
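
To observe this behavior directly, one option is to call the endpoint several times in a loop and print only the cluster names that served the frontend and backend portions of each response. The following snippet is only an illustrative check that reuses the public endpoint and jq from the earlier steps:

for i in $(seq 1 6); do
  curl -s https://frontend.endpoints.PROJECT_ID.cloud.goog | \
    jq -r '"frontend: " + .cluster_name + "  backend: " + .backend_result.cluster_name'
done

Before you enable locality load balancing, expect the frontend and backend cluster names to vary across iterations.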

In the next section, you implement locality load balancing in the service meshto keep requests local.

Enable and test locality load balancing for whereami

In this section, you implement locality load balancing in the service mesh to keep requests local. You also perform some tests to see how whereami handles various failure scenarios.

When you make a request to the whereami frontend service, the load balancer sends the request to the cluster with the lowest latency relative to the client. However, the ingress gateway pods within the mesh then load balance requests to whereami frontend pods across both clusters. This section addresses that issue by enabling locality load balancing within the mesh.

Note: The DestinationRule examples in the following section set an artificially low value for regional failover. When adapting these samples for your own purposes, test extensively to verify that you've found the appropriate value for your needs.
  1. In Cloud Shell, create a DestinationRule YAML file that enables locality load balancing, with regional failover, for the frontend service:

cat <<EOF > ${WORKDIR}/frontend-dr.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: frontend
  namespace: frontend
spec:
  host: whereami-frontend.frontend.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        maxRequestsPerConnection: 0
    loadBalancer:
      simple: LEAST_REQUEST
      localityLbSetting:
        enabled: true
    outlierDetection:
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 1m
EOF

    The preceding code sample only enables local routing for the frontend service. You also need an additional configuration that handles the backend.

  2. Apply the frontend-dr YAML file to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/frontend-dr.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/frontend-dr.yaml
  3. Create a DestinationRule YAML file that enables locality load balancing, with regional failover, for the backend service:

cat <<EOF > ${WORKDIR}/backend-dr.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: backend
  namespace: backend
spec:
  host: whereami-backend.backend.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        maxRequestsPerConnection: 0
    loadBalancer:
      simple: LEAST_REQUEST
      localityLbSetting:
        enabled: true
    outlierDetection:
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 1m
EOF
  4. Apply the backend-dr YAML file to both clusters:

kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/backend-dr.yaml
kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/backend-dr.yaml

    With both sets of DestinationRule YAML files applied to both clusters, requests remain local to the cluster that the request is routed to.

    To test failover for the frontend service, reduce the number of replicas for the ingress gateway in your primary cluster.

    From the perspective of the multi-regional load balancer, this action simulates a cluster failure. It causes that cluster to fail its load balancer health checks. This example uses the cluster in CLUSTER_1_REGION. You should only see responses from the cluster in CLUSTER_2_REGION.

  5. Reduce the number of replicas for the ingress gateway in your primary cluster to zero, and then call the public endpoint to verify that requests have failed over to the other cluster:

kubectl --context=${CLUSTER_1_NAME} -n asm-ingress scale --replicas=0 deployment/asm-ingressgateway
    Note: While there is a HorizontalPodAutoscaler configured with minReplicas: 3 for the asm-ingressgateway deployment, scaling is temporarily deactivated when the deployment's replica count is set to 0. For more information, see Horizontal Pod Autoscaling - Implicit maintenance-mode deactivation.

    The output should resemble the following:

$ curl -s https://frontend.endpoints.PROJECT_ID.cloud.goog | jq
{
  "backend_result": {
    "cluster_name": "edge-to-mesh-02",
    "gce_instance_id": "2717459599837162415",
    "gce_service_account": "e2m-mcg-01.svc.id.goog",
    "host_header": "whereami-backend.backend.svc.cluster.local",
    "metadata": "backend",
    "node_name": "gk3-edge-to-mesh-02-pool-2-675f6abf-dxs2",
    "pod_ip": "10.124.1.7",
    "pod_name": "whereami-backend-7cbdfd788-mp8zv",
    "pod_name_emoji": "🏌🏽‍♀",
    "pod_namespace": "backend",
    "pod_service_account": "whereami-backend",
    "project_id": "e2m-mcg-01",
    "timestamp": "2023-12-01T05:41:18",
    "zone": "us-east4-b"
  },
  "cluster_name": "edge-to-mesh-02",
  "gce_instance_id": "6983018919754001204",
  "gce_service_account": "e2m-mcg-01.svc.id.goog",
  "host_header": "frontend.endpoints.e2m-mcg-01.cloud.goog",
  "metadata": "frontend",
  "node_name": "gk3-edge-to-mesh-02-pool-3-d42ddfbf-qmkn",
  "pod_ip": "10.124.1.142",
  "pod_name": "whereami-frontend-69c4c867cb-xf8db",
  "pod_name_emoji": "🏴",
  "pod_namespace": "frontend",
  "pod_service_account": "whereami-frontend",
  "project_id": "e2m-mcg-01",
  "timestamp": "2023-12-01T05:41:18",
  "zone": "us-east4-b"
}
  6. To resume typical traffic routing, restore the ingress gateway replicas to the original value in the cluster:

kubectl --context=${CLUSTER_1_NAME} -n asm-ingress scale --replicas=3 deployment/asm-ingressgateway
  7. Simulate a failure for the backend service by reducing the number of replicas in the primary region to 0:

kubectl --context=${CLUSTER_1_NAME} -n backend scale --replicas=0 deployment/whereami-backend

    Call the public endpoint again, for example by re-running the earlier curl command. The response for the frontend service should still come from the primary region (us-central1) through the load balancer, and the response for the backend service should now come from the secondary region (us-east4), as expected.

  8. Restore the backend service replicas to the original value to resumetypical traffic routing:

kubectl --context=${CLUSTER_1_NAME} -n backend scale --replicas=3 deployment/whereami-backend

You now have a global HTTP(S) load balancer serving as a frontend to yourservice-mesh-hosted, multi-region application.

Clean up

To avoid incurring charges to your Google Cloud account for the resourcesused in this deployment, either delete the project that containsthe resources, or keep the project and delete the individual resources.

Delete the project

    Caution: Deleting a project has the following effects:
    • Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project.
    • Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project.

    If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits.

  1. In the Google Cloud console, go to theManage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then clickDelete.
  3. In the dialog, type the project ID, and then clickShut down to delete the project.

Delete the individual resources

If you want to keep the Google Cloud project you used in this deployment,delete the individual resources:

  1. In Cloud Shell, delete the HTTPRoute resources:

kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/default-httproute-redirect.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/default-httproute-redirect.yaml
kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/default-httproute.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/default-httproute.yaml
  2. Delete the GKE Gateway resources:

kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/frontend-gateway.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/frontend-gateway.yaml
  3. Delete the policies:

kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/ingress-gateway-healthcheck.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/ingress-gateway-healthcheck.yaml
kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/cloud-armor-backendpolicy.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/cloud-armor-backendpolicy.yaml
  4. Delete the service exports:

kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/svc_export.yaml
kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/svc_export.yaml
  5. Delete the Cloud Armor resources:

gcloud --project=PROJECT_ID compute security-policies rules delete 1000 --security-policy edge-fw-policy --quiet
gcloud --project=PROJECT_ID compute security-policies delete edge-fw-policy --quiet
  6. Delete the Certificate Manager resources:

gcloud --project=PROJECT_ID certificate-manager maps entries delete mcg-cert-map-entry --map="mcg-cert-map" --quiet
gcloud --project=PROJECT_ID certificate-manager maps delete mcg-cert-map --quiet
gcloud --project=PROJECT_ID certificate-manager certificates delete mcg-cert --quiet
  7. Delete the Endpoints DNS entry:

gcloud --project=PROJECT_ID endpoints services delete "frontend.endpoints.PROJECT_ID.cloud.goog" --quiet
  8. Delete the static IP address:

gcloud --project=PROJECT_ID compute addresses delete mcg-ip --global --quiet
  9. Delete the GKE Autopilot clusters. This step takes several minutes.

gcloud --project=PROJECT_ID container clusters delete ${CLUSTER_1_NAME} --region ${CLUSTER_1_REGION} --quiet
gcloud --project=PROJECT_ID container clusters delete ${CLUSTER_2_NAME} --region ${CLUSTER_2_REGION} --quiet

