Deploy Windows applications on managed Kubernetes

Last reviewed 2024-08-14 UTC

This document describes how you deploy the reference architecture in Manage and scale networking for Windows applications that run on managed Kubernetes.

These instructions are intended for cloud architects, network administrators, and IT professionals who are responsible for the design and management of Windows applications that run on Google Kubernetes Engine (GKE) clusters.

Architecture

The following diagram shows the reference architecture that you use when you deploy Windows applications that run on managed GKE clusters.

Data flows through an internal Application Load Balancer and an Envoy gateway.

The preceding diagram shows the workflow for managing networking for Windows applications that run on GKE by using Cloud Service Mesh and Envoy gateways. The regional GKE cluster includes both Windows and Linux node pools. Cloud Service Mesh creates and manages traffic routes to the Windows Pods.

Objectives

  • Create and set up a GKE cluster to run Windows applications and Envoy proxies.
  • Deploy and verify the Windows applications.
  • Configure Cloud Service Mesh as the control plane for the Envoy gateways.
  • Use the Kubernetes Gateway API to provision the internal Application Load Balancer and expose the Envoy gateways.
  • Understand the continuing operations for the deployment that you created.

Costs

Deployment of this architecture uses the following billable components of Google Cloud:

  • Google Kubernetes Engine
  • Cloud Service Mesh
  • Cloud Load Balancing

When you finish this deployment, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.

Before you begin

  1. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Roles required to select or create a project

    • Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
    • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
    Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.


  2. Verify that billing is enabled for your Google Cloud project.

  3. Enable the Cloud Shell and Cloud Service Mesh APIs.

    Roles required to enable APIs

    To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.


  4. In the Google Cloud console, activate Cloud Shell.


If you're running in a shared Virtual Private Cloud (VPC) environment, you also need to follow the instructions to manually create the proxy-only subnet and firewall rule for the Cloud Load Balancing health checks.
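
For reference, a proxy-only subnet can be created with a command like the following. This is a minimal sketch, not taken from the shared VPC instructions: the subnet name proxy-only-subnet and the 10.129.0.0/23 range are assumptions, and you must choose a range that doesn't overlap your existing subnets.

    # Reserve a proxy-only subnet for the regional managed proxies (sketch).
    gcloud compute networks subnets create proxy-only-subnet \
        --purpose=REGIONAL_MANAGED_PROXY \
        --role=ACTIVE \
        --region=us-central1 \
        --network=default \
        --range=10.129.0.0/23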

Create a GKE cluster

Use the following steps to create a GKE cluster. You use the GKE cluster to contain and run the Windows applications and Envoy proxies in this deployment.

  1. In Cloud Shell, run the following Google Cloud CLI command to create a regional GKE cluster with one node in each of the three zones:

    gcloud container clusters create my-cluster --enable-ip-alias \
        --num-nodes=1 \
        --release-channel stable \
        --enable-dataplane-v2 \
        --region us-central1 \
        --scopes=cloud-platform \
        --gateway-api=standard
  2. Add the Windows node pool to the GKE cluster:

    gcloud container node-pools create win-pool \
        --cluster=my-cluster \
        --image-type=windows_ltsc_containerd \
        --no-enable-autoupgrade \
        --region=us-central1 \
        --num-nodes=1 \
        --machine-type=n1-standard-2 \
        --windows-os-version=ltsc2019

    This operation might take around 20 minutes to complete.

  3. Store your Google Cloud project ID in an environment variable:

    export PROJECT_ID=$(gcloud config get project)
  4. Connect to the GKE cluster:

    gcloud container clusters get-credentials my-cluster --region us-central1
  5. List all the nodes in the GKE cluster:

    kubectl get nodes

    The output should display three Linux nodes and three Windows nodes.

    After the GKE cluster is ready, you can deploy two Windows-based test applications.

Deploy two test applications

In this section, you deploy two Windows-based test applications. Both test applications print the hostname that the application runs on. You also create a Kubernetes Service to expose each application through standalone network endpoint groups (NEGs).

When you deploy a Windows-based application and a Kubernetes Service on a regional cluster, GKE creates a NEG for each zone in which the application runs. Later, this deployment guide discusses how you can configure these NEGs as backends for Cloud Service Mesh services.

  1. In Cloud Shell, apply the following YAML file with kubectl to deploy the first test application. This command deploys three instances of the test application, one in each regional zone.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: win-webserver-1
      name: win-webserver-1
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: win-webserver-1
      template:
        metadata:
          labels:
            app: win-webserver-1
          name: win-webserver-1
        spec:
          containers:
          - name: windowswebserver
            image: k8s.gcr.io/e2e-test-images/agnhost:2.36
            command: ["/agnhost"]
            args: ["netexec", "--http-port", "80"]
          topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: kubernetes.io/hostname
            whenUnsatisfiable: DoNotSchedule
            labelSelector:
              matchLabels:
                app: win-webserver-1
          nodeSelector:
            kubernetes.io/os: windows
  2. Apply the matching Kubernetes Service and expose it with a NEG:

    apiVersion: v1
    kind: Service
    metadata:
      name: win-webserver-1
      annotations:
        cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "win-webserver-1"}}}'
    spec:
      type: ClusterIP
      selector:
        app: win-webserver-1
      ports:
      - name: http
        protocol: TCP
        port: 80
        targetPort: 80
  3. Verify the deployment:

    kubectl get pods

    The output shows that the application has three running Windows Pods.

    NAME                               READY   STATUS    RESTARTS   AGE
    win-webserver-1-7bb4c57f6d-hnpgd   1/1     Running   0          5m58s
    win-webserver-1-7bb4c57f6d-rgqsb   1/1     Running   0          5m58s
    win-webserver-1-7bb4c57f6d-xp7ww   1/1     Running   0          5m58s
  4. Verify that the Kubernetes Service was created:

    kubectl get svc

    The output resembles the following:

    NAME              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    kubernetes        ClusterIP   10.64.0.1    <none>        443/TCP   58m
    win-webserver-1   ClusterIP   10.64.6.20   <none>        80/TCP    3m35s
  5. Run the kubectl describe command to verify that corresponding NEGs were created for the Kubernetes Service in each of the zones in which the application runs:

    kubectl describe service win-webserver-1

    The output resembles the following:

    Name:              win-webserver-1
    Namespace:         default
    Labels:            <none>
    Annotations:       cloud.google.com/neg: {"exposed_ports": {"80":{"name": "win-webserver-1"}}}
                       cloud.google.com/neg-status: {"network_endpoint_groups":{"80":"win-webserver-1"},"zones":["us-central1-a","us-central1-b","us-central1-c"]}
    Selector:          app=win-webserver-1
    Type:              ClusterIP
    IP Family Policy:  SingleStack
    IP Families:       IPv4
    IP:                10.64.6.20
    IPs:               10.64.6.20
    Port:              http  80/TCP
    TargetPort:        80/TCP
    Endpoints:         10.60.3.5:80,10.60.4.5:80,10.60.5.5:80
    Session Affinity:  None
    Events:
      Type    Reason  Age    From            Message
      ----    ------  ----   ----            -------
      Normal  Create  4m25s  neg-controller  Created NEG "win-webserver-1" for default/win-webserver-1-win-webserver-1-http/80-80-GCE_VM_IP_PORT-L7 in "us-central1-a".
      Normal  Create  4m18s  neg-controller  Created NEG "win-webserver-1" for default/win-webserver-1-win-webserver-1-http/80-80-GCE_VM_IP_PORT-L7 in "us-central1-b".
      Normal  Create  4m11s  neg-controller  Created NEG "win-webserver-1" for default/win-webserver-1-win-webserver-1-http/80-80-GCE_VM_IP_PORT-L7 in "us-central1-c".
      Normal  Attach  4m9s   neg-controller  Attach 1 network endpoint(s) (NEG "win-webserver-1" in zone "us-central1-a")
      Normal  Attach  4m8s   neg-controller  Attach 1 network endpoint(s) (NEG "win-webserver-1" in zone "us-central1-c")
      Normal  Attach  4m8s   neg-controller  Attach 1 network endpoint(s) (NEG "win-webserver-1" in zone "us-central1-b")

    The output from the preceding command shows you that a NEG was created for each zone.

  6. Optional: Use the gcloud CLI to verify that the NEGs were created:

    gcloud compute network-endpoint-groups list

    The output is as follows:

    NAME             LOCATION       ENDPOINT_TYPE   SIZE
    win-webserver-1  us-central1-a  GCE_VM_IP_PORT  1
    win-webserver-1  us-central1-b  GCE_VM_IP_PORT  1
    win-webserver-1  us-central1-c  GCE_VM_IP_PORT  1
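
    To inspect the individual endpoints behind one of these NEGs, you can list them for a single zone. This command is a sketch that assumes the us-central1-a NEG shown in the preceding output:

    gcloud compute network-endpoint-groups list-network-endpoints win-webserver-1 \
        --zone us-central1-a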
  7. To deploy the second test application, apply the following YAML file:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: win-webserver-2
      name: win-webserver-2
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: win-webserver-2
      template:
        metadata:
          labels:
            app: win-webserver-2
          name: win-webserver-2
        spec:
          containers:
          - name: windowswebserver
            image: k8s.gcr.io/e2e-test-images/agnhost:2.36
            command: ["/agnhost"]
            args: ["netexec", "--http-port", "80"]
          topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: kubernetes.io/hostname
            whenUnsatisfiable: DoNotSchedule
            labelSelector:
              matchLabels:
                app: win-webserver-2
          nodeSelector:
            kubernetes.io/os: windows
  8. Create the corresponding Kubernetes Service:

    apiVersion: v1
    kind: Service
    metadata:
      name: win-webserver-2
      annotations:
        cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "win-webserver-2"}}}'
    spec:
      type: ClusterIP
      selector:
        app: win-webserver-2
      ports:
      - name: http
        protocol: TCP
        port: 80
        targetPort: 80
  9. Verify the application deployment:

    kubectl get pods

    Check the output and verify that there are three running Pods.

  10. Verify that the Kubernetes Service and three NEGs were created:

    kubectl describe service win-webserver-2

Configure Cloud Service Mesh

In this section, you configure Cloud Service Mesh as the control plane for the Envoy gateways.

You map the Envoy gateways to the relevant Cloud Service Mesh routing configuration by specifying the scope_name parameter. The scope_name parameter lets you configure different routing rules for the different Envoy gateways.
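
To illustrate the mapping, the following fragments are excerpted from resources that you create later in this deployment. The scope field in the Cloud Service Mesh Gateway resource must match the --scope_name argument that's passed to the Envoy bootstrap generator:

    # In the Cloud Service Mesh Gateway resource (gateway.yaml, created later):
    scope: gateway-proxy

    # In the Envoy gateway Deployment's bootstrap generator args (created later):
    - --scope_name='gateway-proxy'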

  1. In Cloud Shell, create a firewall rule that allows incoming traffic from the Google services that perform health checks on your applications:

    gcloud compute firewall-rules create allow-health-checks \
        --network=default \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp \
        --source-ranges="35.191.0.0/16,130.211.0.0/22,209.85.152.0/22,209.85.204.0/22"
  2. Create a health check for the first application:

    gcloud compute health-checks create http win-app-1-health-check \
        --enable-logging \
        --request-path="/healthz" \
        --use-serving-port
  3. Create a health check for the second application:

    gcloud compute health-checks create http win-app-2-health-check \
        --enable-logging \
        --request-path="/healthz" \
        --use-serving-port
  4. Create a Cloud Service Mesh backend service for the first application:

    gcloud compute backend-services create win-app-1-service \
        --global \
        --load-balancing-scheme=INTERNAL_SELF_MANAGED \
        --port-name=http \
        --health-checks win-app-1-health-check
  5. Create a Cloud Service Mesh backend service for the second application:

    gcloud compute backend-services create win-app-2-service \
        --global \
        --load-balancing-scheme=INTERNAL_SELF_MANAGED \
        --port-name=http \
        --health-checks win-app-2-health-check
  6. Add the NEGs that you created previously as backends to the Cloud Service Mesh backend service for the first application. This code sample adds one NEG for each zone in the regional cluster that you created.

    BACKEND_SERVICE=win-app-1-service
    APP1_NEG_NAME=win-webserver-1
    MAX_RATE_PER_ENDPOINT=10

    gcloud compute backend-services add-backend $BACKEND_SERVICE \
        --global \
        --network-endpoint-group $APP1_NEG_NAME \
        --network-endpoint-group-zone us-central1-b \
        --balancing-mode RATE \
        --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT

    gcloud compute backend-services add-backend $BACKEND_SERVICE \
        --global \
        --network-endpoint-group $APP1_NEG_NAME \
        --network-endpoint-group-zone us-central1-a \
        --balancing-mode RATE \
        --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT

    gcloud compute backend-services add-backend $BACKEND_SERVICE \
        --global \
        --network-endpoint-group $APP1_NEG_NAME \
        --network-endpoint-group-zone us-central1-c \
        --balancing-mode RATE \
        --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT
  7. Add additional NEGs as backends to the Cloud Service Mesh backend service for the second application. This code sample adds one NEG for each zone in the regional cluster that you created.

    BACKEND_SERVICE=win-app-2-service
    APP2_NEG_NAME=win-webserver-2

    gcloud compute backend-services add-backend $BACKEND_SERVICE \
        --global \
        --network-endpoint-group $APP2_NEG_NAME \
        --network-endpoint-group-zone us-central1-b \
        --balancing-mode RATE \
        --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT

    gcloud compute backend-services add-backend $BACKEND_SERVICE \
        --global \
        --network-endpoint-group $APP2_NEG_NAME \
        --network-endpoint-group-zone us-central1-a \
        --balancing-mode RATE \
        --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT

    gcloud compute backend-services add-backend $BACKEND_SERVICE \
        --global \
        --network-endpoint-group $APP2_NEG_NAME \
        --network-endpoint-group-zone us-central1-c \
        --balancing-mode RATE \
        --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT
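
    Optionally, verify that all three zonal NEGs are attached to each backend service. These describe commands are a sketch for verification:

    gcloud compute backend-services describe win-app-1-service --global
    gcloud compute backend-services describe win-app-2-service --global

    In the output, the backends list should show one entry per zone for each service.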

Configure additional Cloud Service Mesh resources

Now that you've configured the Cloud Service Mesh services, you need to configure two additional resources to complete your Cloud Service Mesh setup.

First, these steps show how to configure a Gateway resource. A Gateway resource is a virtual resource that's used to generate Cloud Service Mesh routing rules. Cloud Service Mesh routing rules are used to configure Envoy proxies as gateways.

Next, the steps show how to configure an HTTPRoute resource for each of the backend services. The HTTPRoute resource maps HTTP requests to the relevant backend service.

  1. In Cloud Shell, create a YAML file called gateway.yaml that defines the Gateway resource:

    cat <<EOF > gateway.yaml
    name: gateway80
    scope: gateway-proxy
    ports:
    - 8080
    type: OPEN_MESH
    EOF
  2. Create the Gateway resource from the gateway.yaml file:

    gcloud network-services gateways import gateway80 \
        --source=gateway.yaml \
        --location=global

    The Gateway name will be projects/$PROJECT_ID/locations/global/gateways/gateway80.

    You use this Gateway name when you create HTTPRoutes for each backend service.

Create the HTTPRoutes for each backend service:

  1. In Cloud Shell, store your Google Cloud project ID in an environment variable:

    export PROJECT_ID=$(gcloud config get project)
  2. Create the HTTPRoute YAML file for the first application:

    cat <<EOF > win-app-1-route.yaml
    name: win-app-1-http-route
    hostnames:
    - win-app-1
    gateways:
    - projects/$PROJECT_ID/locations/global/gateways/gateway80
    rules:
    - action:
        destinations:
        - serviceName: "projects/$PROJECT_ID/locations/global/backendServices/win-app-1-service"
    EOF
  3. Create the HTTPRoute resource for the first application:

    gcloud network-services http-routes import win-app-1-http-route \
        --source=win-app-1-route.yaml \
        --location=global
  4. Create the HTTPRoute YAML file for the second application:

    cat <<EOF > win-app-2-route.yaml
    name: win-app-2-http-route
    hostnames:
    - win-app-2
    gateways:
    - projects/$PROJECT_ID/locations/global/gateways/gateway80
    rules:
    - action:
        destinations:
        - serviceName: "projects/$PROJECT_ID/locations/global/backendServices/win-app-2-service"
    EOF
  5. Create the HTTPRoute resource for the second application:

    gcloud network-services http-routes import win-app-2-http-route \
        --source=win-app-2-route.yaml \
        --location=global
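
    Optionally, to confirm that both routes were imported, list the HTTPRoute resources. This verification command is a sketch:

    gcloud network-services http-routes list --location=global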

Deploy and expose the Envoy gateways

After you create the two Windows-based test applications and the Cloud Service Mesh configuration, you deploy the Envoy gateways by creating a deployment YAML file. The deployment YAML file accomplishes the following tasks:

  • Bootstraps the Envoy gateways.
  • Configures the Envoy gateways to use Cloud Service Mesh as their control plane.
  • Configures the Envoy gateways to use HTTPRoutes for the gateway named gateway80.

Deploy two replica Envoy gateways. This approach helps to make the gateways fault tolerant and provides redundancy. To automatically scale the Envoy gateways based on load, you can optionally configure a Horizontal Pod Autoscaler, as sketched after this paragraph. If you decide to configure a Horizontal Pod Autoscaler, you must follow the instructions in Configuring horizontal Pod autoscaling.
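
A minimal Horizontal Pod Autoscaler for the Envoy gateway Deployment might look like the following sketch. It isn't part of the reference deployment; the replica bounds and CPU target are assumptions that you should tune for your traffic profile:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: td-envoy-gateway
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: td-envoy-gateway   # the Envoy gateway Deployment created in the next step
      minReplicas: 2             # keep at least two replicas for redundancy
      maxReplicas: 5             # assumed upper bound
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70   # assumed CPU utilization target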

  1. In Cloud Shell, create a YAML file:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      creationTimestamp: null
      labels:
        app: td-envoy-gateway
      name: td-envoy-gateway
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: td-envoy-gateway
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: td-envoy-gateway
        spec:
          containers:
          - name: envoy
            image: envoyproxy/envoy:v1.21.6
            imagePullPolicy: Always
            resources:
              limits:
                cpu: "2"
                memory: 1Gi
              requests:
                cpu: 100m
                memory: 128Mi
            env:
            - name: ENVOY_UID
              value: "1337"
            volumeMounts:
            - mountPath: /etc/envoy
              name: envoy-bootstrap
          initContainers:
          - name: td-bootstrap-writer
            image: gcr.io/trafficdirector-prod/xds-client-bootstrap-generator
            imagePullPolicy: Always
            args:
            - --project_number='my_project_number'
            - --scope_name='gateway-proxy'
            - --envoy_port=8080
            - --bootstrap_file_output_path=/var/lib/data/envoy.yaml
            - --traffic_director_url=trafficdirector.googleapis.com:443
            - --expose_stats_port=15005
            volumeMounts:
            - mountPath: /var/lib/data
              name: envoy-bootstrap
          volumes:
          - name: envoy-bootstrap
            emptyDir: {}
    • Replace my_project_number with your project number.

      • You can find your project number by running the following command:
      gcloud projects describe $(gcloud config get project) --format="value(projectNumber)"

    Port 15005 is used to expose the Envoy admin endpoint named /stats. It's also used for the following purposes:

    • As a health check endpoint for the internal Application Load Balancer.
    • As a way to consume Google Cloud Managed Service for Prometheus metrics from Envoy.
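
    After the Envoy gateway Pods are running, you can spot-check this endpoint locally. This is a sketch that assumes kubectl port-forward access to the Deployment:

    # Forward the admin port to your workstation, then query the stats endpoint.
    kubectl port-forward deployment/td-envoy-gateway 15005:15005 &
    curl http://localhost:15005/stats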

    When the two Envoy gateway Pods are running, create a Service of type ClusterIP to expose them. You must also create a YAML file called BackendConfig. BackendConfig defines a custom health check that's used to verify the health of the Envoy gateways.

  2. To create the backend configuration with a custom health check, create a YAML file called envoy-backendconfig:

    apiVersion: cloud.google.com/v1
    kind: BackendConfig
    metadata:
      name: envoy-backendconfig
    spec:
      healthCheck:
        checkIntervalSec: 5
        timeoutSec: 5
        healthyThreshold: 2
        unhealthyThreshold: 3
        type: HTTP
        requestPath: /stats
        port: 15005

    The health check uses the /stats endpoint on port 15005 to continuously check the health of the Envoy gateways.

  3. Create the Envoy gateways service:

    apiVersion: v1
    kind: Service
    metadata:
      name: td-envoy-gateway
      annotations:
        cloud.google.com/backend-config: '{"default": "envoy-backendconfig"}'
    spec:
      type: ClusterIP
      selector:
        app: td-envoy-gateway
      ports:
      - name: http
        protocol: TCP
        port: 8080
        targetPort: 8080
      - name: stats
        protocol: TCP
        port: 15005
        targetPort: 15005
  4. View the Envoy gateways service you created:

    kubectl get svc td-envoy-gateway

Create the Kubernetes Gateway resource

Creating the Kubernetes Gateway resource provisions the internal Application Load Balancer to expose the Envoy gateways.

Before creating that resource, you must create two sample self-signed certificates and then import them into the GKE cluster as Kubernetes Secrets. The certificates enable the following gateway architecture:

  • Each application is served over HTTPS.
  • Each application uses a dedicated certificate.

When using self-managed certificates, the internal Application Load Balancer can use up to the maximum limit of certificates to expose applications with different fully qualified domain names.

To create the certificates, use openssl.

  1. In Cloud Shell, generate a configuration file for the first certificate:

    cat <<EOF > CONFIG_FILE
    [req]
    default_bits = 2048
    req_extensions = extension_requirements
    distinguished_name = dn_requirements
    prompt = no

    [extension_requirements]
    basicConstraints = CA:FALSE
    keyUsage = nonRepudiation, digitalSignature, keyEncipherment
    subjectAltName = @sans_list

    [dn_requirements]
    0.organizationName = example
    commonName = win-webserver-1.example.com

    [sans_list]
    DNS.1 = win-webserver-1.example.com
    EOF
  2. Generate a private key for the first certificate:

    openssl genrsa -out sample_private_key 2048
  3. Generate a certificate request:

    openssl req -new -key sample_private_key -out CSR_FILE -config CONFIG_FILE
  4. Sign and generate the first certificate:

    openssl x509 -req -signkey sample_private_key -in CSR_FILE -out sample.crt -extfile CONFIG_FILE -extensions extension_requirements -days 90
  5. Generate a configuration file for the second certificate:

    cat <<EOF > CONFIG_FILE2
    [req]
    default_bits = 2048
    req_extensions = extension_requirements
    distinguished_name = dn_requirements
    prompt = no

    [extension_requirements]
    basicConstraints = CA:FALSE
    keyUsage = nonRepudiation, digitalSignature, keyEncipherment
    subjectAltName = @sans_list

    [dn_requirements]
    0.organizationName = example
    commonName = win-webserver-2.example.com

    [sans_list]
    DNS.1 = win-webserver-2.example.com
    EOF
  6. Generate a private key for the second certificate:

    openssl genrsa -out sample_private_key2 2048
  7. Generate a certificate request:

    openssl req -new -key sample_private_key2 -out CSR_FILE2 -config CONFIG_FILE2
  8. Sign and generate the second certificate:

    openssl x509 -req -signkey sample_private_key2 -in CSR_FILE2 -out sample2.crt -extfile CONFIG_FILE2 -extensions extension_requirements -days 90
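
    Optionally, before you import the certificates, confirm the subject and subject alternative names in each certificate. This verification command is a sketch for the first certificate:

    openssl x509 -in sample.crt -noout -text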

Import certificates as Kubernetes Secrets

In this section, you accomplish the following tasks:

  • Import the self-signed certificates into the GKE cluster as Kubernetes Secrets.
  • Create a static IP address for an internal VPC.
  • Create the Kubernetes Gateway API resource.
  • Verify that the certificates work.
  1. In Cloud Shell, import the first certificate as a Kubernetes Secret:

    kubectl create secret tls sample-cert --cert sample.crt --key sample_private_key
  2. Import the second certificate as a Kubernetes Secret:

    kubectl create secret tls sample-cert-2 --cert sample2.crt --key sample_private_key2
  3. To enable the internal Application Load Balancer, create a static IP address on the internal VPC:

    gcloud compute addresses create sample-ingress-ip --region us-central1 --subnet default
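
    Optionally, note the reserved address for later reference. This verification command is a sketch:

    gcloud compute addresses describe sample-ingress-ip \
        --region us-central1 --format="value(address)"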
  4. Create the Kubernetes Gateway API resource YAML file:

    kind: Gateway
    apiVersion: gateway.networking.k8s.io/v1beta1
    metadata:
      name: internal-https
    spec:
      gatewayClassName: gke-l7-rilb
      addresses:
      - type: NamedAddress
        value: sample-ingress-ip
      listeners:
      - name: https
        protocol: HTTPS
        port: 443
        tls:
          mode: Terminate
          certificateRefs:
          - name: sample-cert
          - name: sample-cert-2

    By default, a Kubernetes Gateway has no default routes. When requests are sent to it, the gateway returns a page not found (404) error.

  5. Configure a default route YAML file for the Kubernetes Gateway that passes all incoming requests to the Envoy gateways:

    kind: HTTPRoute
    apiVersion: gateway.networking.k8s.io/v1beta1
    metadata:
      name: envoy-default-backend
    spec:
      parentRefs:
      - kind: Gateway
        name: internal-https
      rules:
      - backendRefs:
        - name: td-envoy-gateway
          port: 8080

    Verify the full flow by sending HTTP requests to both applications. To verify that the Envoy gateways route traffic to the correct application Pods, inspect the HTTP Host header.

  6. Find and store the Kubernetes Gateway IP address in an environment variable:

    export EXTERNAL_IP=$(kubectl get gateway internal-https -o json | jq .status.addresses[0].value -r)
  7. Send a request to the first application:

    curl --insecure -H "Host: win-app-1" https://$EXTERNAL_IP/hostName
  8. Send a request to the second application:

    curl --insecure -H "Host: win-app-2" https://$EXTERNAL_IP/hostName
  9. Verify that the hostname returned from each request matches the name of one of the Pods running the win-webserver-1 and win-webserver-2 applications:

    kubectl get pods

    The output should display the win-webserver-1 and win-webserver-2 Pods whose names match the hostnames returned by the curl commands.

Monitor Envoy gateways

Monitor your Envoy gateways with Google Cloud Managed Service for Prometheus.

Google Cloud Managed Service for Prometheus should be enabled by default on the cluster that you created earlier.

  1. In Cloud Shell, create a PodMonitoring resource by applying the following YAML file:

    apiVersion: monitoring.googleapis.com/v1
    kind: PodMonitoring
    metadata:
      name: prom-envoy
    spec:
      selector:
        matchLabels:
          app: td-envoy-gateway
      endpoints:
      - port: 15005
        interval: 30s
        path: /stats/prometheus

    After you apply the YAML file, the system begins to collect Google Cloud Managed Service for Prometheus metrics, which you can view in a dashboard.

  2. To create the Google Cloud Managed Service for Prometheus metrics dashboard, follow these instructions:

    1. Sign in to the Google Cloud console.
    2. Open the menu.
    3. Click Operations > Monitoring > Dashboards.
  3. To import the dashboard, follow these instructions:

    1. On the Dashboards screen, click Sample Library.
    2. Enter envoy in the filter box.
    3. Click Istio Envoy Prometheus Overview.
    4. Select the checkbox.
    5. Click Import, and then click Confirm to import the dashboard.
  4. To view the dashboard, follow these instructions:

    1. Click Dashboard List.
    2. Select Integrations.
    3. Click Istio Envoy Prometheus Overview to view the dashboard.

You can now see the most important metrics for your Envoy gateways. You can also configure alerts based on your criteria. Before you clean up, send a few more test requests to the applications (for example, with the loop that follows) and see how the dashboard updates with the latest metrics.
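
For example, a simple loop like the following sketch reuses the EXTERNAL_IP variable from earlier to generate a burst of requests to both applications:

    # Send ten requests to each application to populate the dashboard with fresh metrics.
    for i in $(seq 1 10); do
      curl --insecure -H "Host: win-app-1" https://$EXTERNAL_IP/hostName
      curl --insecure -H "Host: win-app-2" https://$EXTERNAL_IP/hostName
    done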

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this deployment, either delete the project that contains the resources, or keep the project and delete the individual resources.

Delete the project

    Caution: Deleting a project has the following effects:
    • Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project.
    • Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project.

    If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits.

  1. In the Google Cloud console, go to the Manage resources page.


  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

Contributors

Author: Eitan Eibschutz | Staff Technical Solutions Consultant


