Deploy Windows applications on managed Kubernetes
This document describes how you deploy the reference architecture in Manage and scale networking for Windows applications that run on managed Kubernetes.
These instructions are intended for cloud architects, network administrators, and IT professionals who are responsible for the design and management of Windows applications that run on Google Kubernetes Engine (GKE) clusters.
Architecture
The following diagram shows the reference architecture that you use when you deploy Windows applications that run on managed GKE clusters.
As shown in the preceding diagram, the arrows represent the workflow for managing networking for Windows applications that run on GKE using Cloud Service Mesh and Envoy gateways. The regional GKE cluster includes both Windows and Linux node pools. Cloud Service Mesh creates and manages traffic routes to the Windows Pods.
Objectives
- Create and set up a GKE cluster to run Windows applications and Envoy proxies.
- Deploy and verify the Windows applications.
- Configure Cloud Service Mesh as the control plane for the Envoy gateways.
- Use the Kubernetes Gateway API to provision the internal Application Load Balancer and expose the Envoy gateways.
- Understand the continuous deployment operations that you created.
Costs
Deployment of this architecture uses billable components of Google Cloud.
When you finish this deployment, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.
Before you begin
In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

Roles required to select or create a project:

- Select a project: Selecting a project doesn't require a specific IAM role; you can select any project that you've been granted a role on.
- Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission.
Verify that billing is enabled for your Google Cloud project.
Enable the Cloud Shell and Cloud Service Mesh APIs.

To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission.

In the Google Cloud console, activate Cloud Shell.

If you're running in a shared Virtual Private Cloud (VPC) environment, you also need to follow the instructions to manually create the proxy-only subnet and firewall rule for the Cloud Load Balancing health checks.
Create a GKE cluster
Use the following steps to create a GKE cluster. You use the GKE cluster to contain and run the Windows applications and Envoy proxies in this deployment.
In Cloud Shell, run the following Google Cloud CLI command to create a regional GKE cluster with one node in each of the region's three zones:
```
gcloud container clusters create my-cluster --enable-ip-alias \
    --num-nodes=1 \
    --release-channel stable \
    --enable-dataplane-v2 \
    --region us-central1 \
    --scopes=cloud-platform \
    --gateway-api=standard
```

Add the Windows node pool to the GKE cluster:

```
gcloud container node-pools create win-pool \
    --cluster=my-cluster \
    --image-type=windows_ltsc_containerd \
    --no-enable-autoupgrade \
    --region=us-central1 \
    --num-nodes=1 \
    --machine-type=n1-standard-2 \
    --windows-os-version=ltsc2019
```

This operation might take around 20 minutes to complete.

Store your Google Cloud project ID in an environment variable:

```
export PROJECT_ID=$(gcloud config get project)
```

Connect to the GKE cluster:

```
gcloud container clusters get-credentials my-cluster --region us-central1
```

List all the nodes in the GKE cluster:

```
kubectl get nodes
```

The output should display three Linux nodes and three Windows nodes.
After the GKE cluster is ready, you can deploy two Windows-based test applications.
Deploy two test applications
In this section, you deploy two Windows-based test applications. Both test applications print the hostname that the application runs on. You also create a Kubernetes Service to expose each application through standalone network endpoint groups (NEGs).
When you deploy a Windows-based application and a Kubernetes Service on a regional cluster, a NEG is created for each zone in which the application runs. Later, this deployment guide discusses how you can configure these NEGs as backends for Cloud Service Mesh services.
In Cloud Shell, apply the following YAML file with kubectl to deploy the first test application. This command deploys three instances of the test application, one in each zone of the region:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: win-webserver-1
  name: win-webserver-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: win-webserver-1
  template:
    metadata:
      labels:
        app: win-webserver-1
      name: win-webserver-1
    spec:
      containers:
      - name: windowswebserver
        image: k8s.gcr.io/e2e-test-images/agnhost:2.36
        command: ["/agnhost"]
        args: ["netexec", "--http-port", "80"]
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: win-webserver-1
      nodeSelector:
        kubernetes.io/os: windows
```

Apply the matching Kubernetes Service and expose it with a NEG:

```
apiVersion: v1
kind: Service
metadata:
  name: win-webserver-1
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "win-webserver-1"}}}'
spec:
  type: ClusterIP
  selector:
    app: win-webserver-1
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
```

Verify the deployment:

```
kubectl get pods
```

The output shows that the application has three running Windows Pods:

```
NAME                               READY   STATUS    RESTARTS   AGE
win-webserver-1-7bb4c57f6d-hnpgd   1/1     Running   0          5m58s
win-webserver-1-7bb4c57f6d-rgqsb   1/1     Running   0          5m58s
win-webserver-1-7bb4c57f6d-xp7ww   1/1     Running   0          5m58s
```
Verify that the Kubernetes Service was created:

```
kubectl get svc
```

The output resembles the following:

```
NAME              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes        ClusterIP   10.64.0.1    <none>        443/TCP   58m
win-webserver-1   ClusterIP   10.64.6.20   <none>        80/TCP    3m35s
```

Run the kubectl describe command to verify that corresponding NEGs were created for the Kubernetes Service in each of the zones in which the application runs:

```
kubectl describe service win-webserver-1
```

The output resembles the following:

```
Name:                win-webserver-1
Namespace:           default
Labels:              <none>
Annotations:         cloud.google.com/neg: {"exposed_ports": {"80":{"name": "win-webserver-1"}}}
                     cloud.google.com/neg-status: {"network_endpoint_groups":{"80":"win-webserver-1"},"zones":["us-central1-a","us-central1-b","us-central1-c"]}
Selector:            app=win-webserver-1
Type:                ClusterIP
IP Family Policy:    SingleStack
IP Families:         IPv4
IP:                  10.64.6.20
IPs:                 10.64.6.20
Port:                http  80/TCP
TargetPort:          80/TCP
Endpoints:           10.60.3.5:80,10.60.4.5:80,10.60.5.5:80
Session Affinity:    None
Events:
  Type    Reason  Age    From            Message
  ----    ------  ----   ----            -------
  Normal  Create  4m25s  neg-controller  Created NEG "win-webserver-1" for default/win-webserver-1-win-webserver-1-http/80-80-GCE_VM_IP_PORT-L7 in "us-central1-a".
  Normal  Create  4m18s  neg-controller  Created NEG "win-webserver-1" for default/win-webserver-1-win-webserver-1-http/80-80-GCE_VM_IP_PORT-L7 in "us-central1-b".
  Normal  Create  4m11s  neg-controller  Created NEG "win-webserver-1" for default/win-webserver-1-win-webserver-1-http/80-80-GCE_VM_IP_PORT-L7 in "us-central1-c".
  Normal  Attach  4m9s   neg-controller  Attach 1 network endpoint(s) (NEG "win-webserver-1" in zone "us-central1-a")
  Normal  Attach  4m8s   neg-controller  Attach 1 network endpoint(s) (NEG "win-webserver-1" in zone "us-central1-c")
  Normal  Attach  4m8s   neg-controller  Attach 1 network endpoint(s) (NEG "win-webserver-1" in zone "us-central1-b")
```

The output from the preceding command shows that a NEG was created for each zone.
Optional: Use the gcloud CLI to verify that the NEGs were created:

```
gcloud compute network-endpoint-groups list
```

The output is as follows:

```
NAME             LOCATION       ENDPOINT_TYPE   SIZE
win-webserver-1  us-central1-a  GCE_VM_IP_PORT  1
win-webserver-1  us-central1-b  GCE_VM_IP_PORT  1
win-webserver-1  us-central1-c  GCE_VM_IP_PORT  1
```
To deploy the second test application, apply the following YAML file:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: win-webserver-2
  name: win-webserver-2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: win-webserver-2
  template:
    metadata:
      labels:
        app: win-webserver-2
      name: win-webserver-2
    spec:
      containers:
      - name: windowswebserver
        image: k8s.gcr.io/e2e-test-images/agnhost:2.36
        command: ["/agnhost"]
        args: ["netexec", "--http-port", "80"]
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: win-webserver-2
      nodeSelector:
        kubernetes.io/os: windows
```

Create the corresponding Kubernetes Service:

```
apiVersion: v1
kind: Service
metadata:
  name: win-webserver-2
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "win-webserver-2"}}}'
spec:
  type: ClusterIP
  selector:
    app: win-webserver-2
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
```

Verify the application deployment:

```
kubectl get pods
```

Check the output and verify that there are three running Pods.

Verify that the Kubernetes Service and three NEGs were created:

```
kubectl describe service win-webserver-2
```
Configure Cloud Service Mesh
In this section, you configure Cloud Service Mesh as the control plane for the Envoy gateways.
You map the Envoy gateways to the relevant Cloud Service Mesh routing configuration by specifying the scope_name parameter. The scope_name parameter lets you configure different routing rules for the different Envoy gateways.
In Cloud Shell, create a firewall rule that allows incoming traffic from the Google Cloud systems that perform health checks on the applications:

```
gcloud compute firewall-rules create allow-health-checks \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp \
    --source-ranges="35.191.0.0/16,130.211.0.0/22,209.85.152.0/22,209.85.204.0/22"
```

Create a health check for the first application:

```
gcloud compute health-checks create http win-app-1-health-check \
    --enable-logging \
    --request-path="/healthz" \
    --use-serving-port
```

Create a health check for the second application:

```
gcloud compute health-checks create http win-app-2-health-check \
    --enable-logging \
    --request-path="/healthz" \
    --use-serving-port
```

Create a Cloud Service Mesh backend service for the first application:

```
gcloud compute backend-services create win-app-1-service \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --port-name=http \
    --health-checks win-app-1-health-check
```

Create a Cloud Service Mesh backend service for the second application:

```
gcloud compute backend-services create win-app-2-service \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --port-name=http \
    --health-checks win-app-2-health-check
```

Add the NEGs that you created for the first application as backends to its Cloud Service Mesh backend service. This code sample adds one NEG for each zone of the regional cluster:

```
BACKEND_SERVICE=win-app-1-service
APP1_NEG_NAME=win-webserver-1
MAX_RATE_PER_ENDPOINT=10

gcloud compute backend-services add-backend $BACKEND_SERVICE \
    --global \
    --network-endpoint-group $APP1_NEG_NAME \
    --network-endpoint-group-zone us-central1-b \
    --balancing-mode RATE \
    --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT

gcloud compute backend-services add-backend $BACKEND_SERVICE \
    --global \
    --network-endpoint-group $APP1_NEG_NAME \
    --network-endpoint-group-zone us-central1-a \
    --balancing-mode RATE \
    --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT

gcloud compute backend-services add-backend $BACKEND_SERVICE \
    --global \
    --network-endpoint-group $APP1_NEG_NAME \
    --network-endpoint-group-zone us-central1-c \
    --balancing-mode RATE \
    --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT
```

Add the NEGs that you created for the second application as backends to its Cloud Service Mesh backend service. This code sample adds one NEG for each zone of the regional cluster:

```
BACKEND_SERVICE=win-app-2-service
APP2_NEG_NAME=win-webserver-2

gcloud compute backend-services add-backend $BACKEND_SERVICE \
    --global \
    --network-endpoint-group $APP2_NEG_NAME \
    --network-endpoint-group-zone us-central1-b \
    --balancing-mode RATE \
    --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT

gcloud compute backend-services add-backend $BACKEND_SERVICE \
    --global \
    --network-endpoint-group $APP2_NEG_NAME \
    --network-endpoint-group-zone us-central1-a \
    --balancing-mode RATE \
    --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT

gcloud compute backend-services add-backend $BACKEND_SERVICE \
    --global \
    --network-endpoint-group $APP2_NEG_NAME \
    --network-endpoint-group-zone us-central1-c \
    --balancing-mode RATE \
    --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT
```
Configure additional Cloud Service Mesh resources
Now that you've configured the Cloud Service Mesh services, you need to configure two additional resources to complete your Cloud Service Mesh setup.
First, these steps show how to configure a Gateway resource. A Gateway resource is a virtual resource that's used to generate Cloud Service Mesh routing rules. Cloud Service Mesh routing rules are used to configure Envoy proxies as gateways.
Next, the steps show how to configure an HTTPRoute resource for each of the backend services. The HTTPRoute resource maps HTTP requests to the relevant backend service.
In Cloud Shell, create a YAML file called gateway.yaml that defines the Gateway resource:

```
cat <<EOF > gateway.yaml
name: gateway80
scope: gateway-proxy
ports:
- 8080
type: OPEN_MESH
EOF
```

Create the Gateway resource by importing the gateway.yaml file:

```
gcloud network-services gateways import gateway80 \
    --source=gateway.yaml \
    --location=global
```

The Gateway name will be projects/$PROJECT_ID/locations/global/gateways/gateway80. You use this Gateway name when you create HTTPRoutes for each backend service.
Create the HTTPRoutes for each backend service:
In Cloud Shell, store your Google Cloud project ID in an environment variable:

```
export PROJECT_ID=$(gcloud config get project)
```

Create the HTTPRoute YAML file for the first application:

```
cat <<EOF > win-app-1-route.yaml
name: win-app-1-http-route
hostnames:
- win-app-1
gateways:
- projects/$PROJECT_ID/locations/global/gateways/gateway80
rules:
- action:
    destinations:
    - serviceName: "projects/$PROJECT_ID/locations/global/backendServices/win-app-1-service"
EOF
```

Create the HTTPRoute resource for the first application:

```
gcloud network-services http-routes import win-app-1-http-route \
    --source=win-app-1-route.yaml \
    --location=global
```

Create the HTTPRoute YAML file for the second application:

```
cat <<EOF > win-app-2-route.yaml
name: win-app-2-http-route
hostnames:
- win-app-2
gateways:
- projects/$PROJECT_ID/locations/global/gateways/gateway80
rules:
- action:
    destinations:
    - serviceName: "projects/$PROJECT_ID/locations/global/backendServices/win-app-2-service"
EOF
```

Create the HTTPRoute resource for the second application:

```
gcloud network-services http-routes import win-app-2-http-route \
    --source=win-app-2-route.yaml \
    --location=global
```
Deploy and expose the Envoy gateways
After you create the two Windows-based test applications and the Cloud Service Mesh configuration, you deploy the Envoy gateways by creating a deployment YAML file. The deployment YAML file accomplishes the following tasks:

- Bootstraps the Envoy gateways.
- Configures the Envoy gateways to use Cloud Service Mesh as their control plane.
- Configures the Envoy gateways to use HTTPRoutes for the gateway named gateway80.

You deploy two replicas of the Envoy gateways. This approach helps to make the gateways fault tolerant and provides redundancy. To automatically scale the Envoy gateways based on load, you can optionally configure a Horizontal Pod Autoscaler. If you decide to configure a Horizontal Pod Autoscaler, you must follow the instructions in Configuring horizontal Pod autoscaling.
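If you choose to configure a Horizontal Pod Autoscaler, a minimal manifest might look like the following sketch. The target Deployment name td-envoy-gateway matches the gateway Deployment that you create later in this section; the replica bounds and the 80% CPU target are illustrative assumptions, not values from this guide:

```
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: td-envoy-gateway-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: td-envoy-gateway
  # Keep at least the two replicas the guide recommends for redundancy.
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        # Assumed target; tune for your traffic profile.
        averageUtilization: 80
```

Apply the manifest with kubectl apply after you deploy the Envoy gateways, so that the target Deployment exists.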
In Cloud Shell, create a YAML file:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: td-envoy-gateway
  name: td-envoy-gateway
spec:
  replicas: 2
  selector:
    matchLabels:
      app: td-envoy-gateway
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: td-envoy-gateway
    spec:
      containers:
      - name: envoy
        image: envoyproxy/envoy:v1.21.6
        imagePullPolicy: Always
        resources:
          limits:
            cpu: "2"
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 128Mi
        env:
        - name: ENVOY_UID
          value: "1337"
        volumeMounts:
        - mountPath: /etc/envoy
          name: envoy-bootstrap
      initContainers:
      - name: td-bootstrap-writer
        image: gcr.io/trafficdirector-prod/xds-client-bootstrap-generator
        imagePullPolicy: Always
        args:
        - --project_number='my_project_number'
        - --scope_name='gateway-proxy'
        - --envoy_port=8080
        - --bootstrap_file_output_path=/var/lib/data/envoy.yaml
        - --traffic_director_url=trafficdirector.googleapis.com:443
        - --expose_stats_port=15005
        volumeMounts:
        - mountPath: /var/lib/data
          name: envoy-bootstrap
      volumes:
      - name: envoy-bootstrap
        emptyDir: {}
```

Replace my_project_number with your project number. You can find your project number by running the following command:

```
gcloud projects describe $(gcloud config get project) --format="value(projectNumber)"
```
Port 15005 is used to expose the Envoy admin endpoint named /stats. It's also used for the following purposes:

- As a health check endpoint for the internal Application Load Balancer.
- As a way to consume Google Cloud Managed Service for Prometheus metrics from Envoy.
When the two Envoy gateway Pods are running, create a Service of type ClusterIP to expose them. You must also create a BackendConfig resource. The BackendConfig defines a custom health check that is used to verify the health of the Envoy gateways.

To create the backend configuration with a custom health check, create a YAML file called envoy-backendconfig and apply it:

```
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: envoy-backendconfig
spec:
  healthCheck:
    checkIntervalSec: 5
    timeoutSec: 5
    healthyThreshold: 2
    unhealthyThreshold: 3
    type: HTTP
    requestPath: /stats
    port: 15005
```

The health check uses the /stats endpoint on port 15005 to continuously check the health of the Envoy gateways.

Create the Envoy gateways Service:

```
apiVersion: v1
kind: Service
metadata:
  name: td-envoy-gateway
  annotations:
    cloud.google.com/backend-config: '{"default": "envoy-backendconfig"}'
spec:
  type: ClusterIP
  selector:
    app: td-envoy-gateway
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 8080
  - name: stats
    protocol: TCP
    port: 15005
    targetPort: 15005
```

View the Envoy gateways Service you created:

```
kubectl get svc td-envoy-gateway
```
Create the Kubernetes Gateway resource
Creating the Kubernetes Gateway resource provisions the internal Application Load Balancer to expose the Envoy gateways.
Before creating that resource, you must create two sample self-signed certificates and then import them into the GKE cluster as Kubernetes Secrets. The certificates enable the following gateway architecture:
- Each application is served over HTTPS.
- Each application uses a dedicated certificate.
When using self-managed certificates, the internal Application Load Balancer can use up to the maximum limit of certificates to expose applications with different fully qualified domain names.
To create the certificates, use openssl.
In Cloud Shell, generate a configuration file for the first certificate:

```
cat <<EOF > CONFIG_FILE
[req]
default_bits = 2048
req_extensions = extension_requirements
distinguished_name = dn_requirements
prompt = no

[extension_requirements]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @sans_list

[dn_requirements]
0.organizationName = example
commonName = win-webserver-1.example.com

[sans_list]
DNS.1 = win-webserver-1.example.com
EOF
```

Generate a private key for the first certificate:

```
openssl genrsa -out sample_private_key 2048
```

Generate a certificate request:

```
openssl req -new -key sample_private_key -out CSR_FILE -config CONFIG_FILE
```

Sign and generate the first certificate:

```
openssl x509 -req -signkey sample_private_key -in CSR_FILE -out sample.crt -extfile CONFIG_FILE -extensions extension_requirements -days 90
```

Generate a configuration file for the second certificate:

```
cat <<EOF > CONFIG_FILE2
[req]
default_bits = 2048
req_extensions = extension_requirements
distinguished_name = dn_requirements
prompt = no

[extension_requirements]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @sans_list

[dn_requirements]
0.organizationName = example
commonName = win-webserver-2.example.com

[sans_list]
DNS.1 = win-webserver-2.example.com
EOF
```

Generate a private key for the second certificate:

```
openssl genrsa -out sample_private_key2 2048
```

Generate a certificate request:

```
openssl req -new -key sample_private_key2 -out CSR_FILE2 -config CONFIG_FILE2
```

Sign and generate the second certificate:

```
openssl x509 -req -signkey sample_private_key2 -in CSR_FILE2 -out sample2.crt -extfile CONFIG_FILE2 -extensions extension_requirements -days 90
```
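Optionally, you can inspect a generated certificate before importing it to confirm its common name and validity period. The following sketch generates a throwaway self-signed certificate with the same common name and prints its subject; the check_key and check.crt file names are hypothetical and not part of the procedure above:

```shell
# Generate a throwaway key and self-signed certificate for inspection only.
openssl genrsa -out check_key 2048
openssl req -new -x509 -key check_key -out check.crt -days 90 \
    -subj "/O=example/CN=win-webserver-1.example.com"

# Print the subject; it should include CN = win-webserver-1.example.com.
openssl x509 -in check.crt -noout -subject
```

To inspect the certificates you created in the preceding steps, run the same openssl x509 command against sample.crt or sample2.crt.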
Import certificates as Kubernetes Secrets
In this section, you accomplish the following tasks:
- Import the self-signed certificates into the GKE cluster as Kubernetes Secrets.
- Create a static IP address for an internal VPC.
- Create the Kubernetes Gateway API resource.
- Verify that the certificates work.
In Cloud Shell, import the first certificate as a Kubernetes Secret:

```
kubectl create secret tls sample-cert --cert sample.crt --key sample_private_key
```

Import the second certificate as a Kubernetes Secret:

```
kubectl create secret tls sample-cert-2 --cert sample2.crt --key sample_private_key2
```

To enable the internal Application Load Balancer, create a static IP address on the internal VPC:

```
gcloud compute addresses create sample-ingress-ip --region us-central1 --subnet default
```

Create the Kubernetes Gateway API resource YAML file:

```
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: internal-https
spec:
  gatewayClassName: gke-l7-rilb
  addresses:
  - type: NamedAddress
    value: sample-ingress-ip
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - name: sample-cert
      - name: sample-cert-2
```

By default, a Kubernetes Gateway has no default routes. The gateway returns a page not found (404) error when requests are sent to it.
Configure a default route for the Kubernetes Gateway that passes all incoming requests to the Envoy gateways by applying the following YAML file:

```
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: envoy-default-backend
spec:
  parentRefs:
  - kind: Gateway
    name: internal-https
  rules:
  - backendRefs:
    - name: td-envoy-gateway
      port: 8080
```

Verify the full flow by sending HTTP requests to both applications. To verify that the Envoy gateways route traffic to the correct application Pods, inspect the HTTP Host header.

Find and store the Kubernetes Gateway IP address in an environment variable:

```
export EXTERNAL_IP=$(kubectl get gateway internal-https -o json | jq .status.addresses[0].value -r)
```

Send a request to the first application:

```
curl --insecure -H "Host: win-app-1" https://$EXTERNAL_IP/hostName
```

Send a request to the second application:

```
curl --insecure -H "Host: win-app-2" https://$EXTERNAL_IP/hostName
```

Verify that the hostname returned by each request matches the name of one of the running Pods:

```
kubectl get pods
```

The output should display the win-webserver-1 and win-webserver-2 Pods whose names match the hostnames returned by the requests.
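If the export command above stores an empty value, you can sanity-check the jq filter against a sample document. The JSON below is a hypothetical, truncated Gateway status fragment, not output from a real cluster:

```shell
# Hypothetical Gateway status fragment; a real cluster returns many more fields.
STATUS_JSON='{"status":{"addresses":[{"type":"IPAddress","value":"10.128.0.5"}]}}'

# Apply the same filter that the export command uses.
echo "$STATUS_JSON" | jq -r '.status.addresses[0].value'
# Prints: 10.128.0.5
```

If the filter prints the sample address but the real command returns nothing, the Gateway has most likely not finished provisioning its address yet.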
Monitor Envoy gateways
Monitor your Envoy gateways with Google Cloud Managed Service for Prometheus.
Google Cloud Managed Service for Prometheus should be enabled by default on the cluster that you created earlier.

In Cloud Shell, create a PodMonitoring resource by applying the following YAML file:

```
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: prom-envoy
spec:
  selector:
    matchLabels:
      app: td-envoy-gateway
  endpoints:
  - port: 15005
    interval: 30s
    path: /stats/prometheus
```

After you apply the YAML file, the system begins to collect Google Cloud Managed Service for Prometheus metrics, which you can view in a dashboard.
To create the Google Cloud Managed Service for Prometheus metrics dashboard, follow these instructions:

- Sign in to the Google Cloud console.
- Open the menu.
- Click Operations > Monitoring > Dashboards.

To import the dashboard, follow these instructions:

- On the Dashboards screen, click Sample Library.
- Enter envoy in the filter box.
- Click Istio Envoy Prometheus Overview.
- Select the checkbox.
- Click Import and then click Confirm to import the dashboard.

To view the dashboard, follow these instructions:

- Click Dashboard List.
- Select Integrations.
- Click Istio Envoy Prometheus Overview to view the dashboard.
You can now see the most important metrics of your Envoy gateways. You can also configure alerts based on your criteria. Before you clean up, send a few more test requests to the applications and see how the dashboard updates with the latest metrics.
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this deployment, either delete the project that contains the resources, or keep the project and delete the individual resources.
Delete the project
In the Google Cloud console, go to the Manage resources page. In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project.
What's next
- Learn more about the Google Cloud products used in this deployment guide.
- For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.
Contributors
Author: Eitan Eibschutz | Staff Technical Solutions Consultant
Other contributors:
- John Laham | Solutions Architect
- Kaslin Fields | Developer Advocate
- Maridi (Raju) Makaraju | Supportability Tech Lead
- Valavan Rajakumar | Key Enterprise Architect
- Victor Moreno | Product Manager, Cloud Networking
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-08-14 UTC.