Set up service security with proxyless gRPC (legacy)

This guide shows you how to configure security for a proxyless gRPC service mesh.

Note: This guide only supports Cloud Service Mesh with Google Cloud APIs and does not support Istio APIs. For more information, see Cloud Service Mesh overview.

This document applies to Cloud Service Mesh with the load balancing APIs only. This is a legacy document.

Requirements

Before you configure service security for the gRPC proxyless service mesh, make sure that you meet the following requirements.

Configure Identity and Access Management

You must have the required permissions to use Google Kubernetes Engine. At a minimum, you must have the following roles:

  • roles/container.clusterAdmin GKE role
  • roles/compute.instanceAdmin Compute Engine role
  • roles/iam.serviceAccountUser role

To create the resources required for the setup, you must have the compute.NetworkAdmin role. This role contains all the necessary permissions to create, update, delete, list, and use (that is, reference them in other resources) the required resources. If you are the owner-editor of your project, you automatically have this role.

Note that the networksecurity.googleapis.com.clientTlsPolicies.use and networksecurity.googleapis.com.serverTlsPolicies.use permissions are not enforced when you reference these resources in the backend service and target HTTPS proxy resources.

If this check is enforced in the future and you are using the compute.NetworkAdmin role, you won't notice any issues when it takes effect.

If you are using custom roles and this check is enforced in the future, you must make sure to include the respective .use permission. Otherwise, you might find that your custom role does not have the necessary permissions to refer to clientTlsPolicy or serverTlsPolicy from the backend service or target HTTPS proxy, respectively.
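As a sketch only, a custom role that carries these .use permissions could be defined in a role file and created with gcloud iam roles create. The role ID, title, and file path below are illustrative, and the gcloud command is commented out because it modifies your project; the permission names are the short IAM form of the policies named above.

```shell
# Write a hypothetical custom role definition that includes the .use
# permissions for clientTlsPolicy and serverTlsPolicy resources.
cat > /tmp/mesh-security-role.yaml << 'EOF'
title: Mesh Security Policy User
description: Can reference clientTlsPolicy and serverTlsPolicy resources.
stage: GA
includedPermissions:
- networksecurity.clientTlsPolicies.use
- networksecurity.serverTlsPolicies.use
EOF

# To create the role (illustrative; replace PROJECT_ID with your project):
# gcloud iam roles create meshSecurityPolicyUser \
#   --project=PROJECT_ID --file=/tmp/mesh-security-role.yaml
echo "role file written"
```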

Prepare for setup

Proxyless service mesh (PSM) security adds security to a service mesh that is set up for load balancing per the proxyless gRPC services documentation. In a proxyless service mesh, a gRPC client uses the xds: scheme in the URI to access the service, which enables the PSM load balancing and endpoint discovery features.

Update gRPC clients and servers to the correct version

Build or rebuild your applications using the minimum supported gRPC version for your language.

Update the bootstrap file

gRPC applications use a single bootstrap file, which must have all of the fields that are required by gRPC client- and server-side code. A bootstrap generator automatically generates the bootstrap file to include flags and values that PSM security needs. For more information, see the Bootstrap file section, which includes a sample bootstrap file.
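For orientation, the following sketch shows the general shape of a gRPC xDS bootstrap file. The field names follow the gRPC bootstrap format, but every value here (project number, node ID, refresh interval) is an illustrative placeholder, not real generator output; rely on the bootstrap generator to produce the actual file.

```shell
# Write a minimal bootstrap-file sketch with placeholder values.
cat > /tmp/td-grpc-bootstrap-sketch.json << 'EOF'
{
  "xds_servers": [
    {
      "server_uri": "trafficdirector.googleapis.com:443",
      "channel_creds": [{"type": "google_default"}],
      "server_features": ["xds_v3"]
    }
  ],
  "node": {
    "id": "projects/123456789012/networks/default/nodes/example-node-id",
    "metadata": {"app": "helloworld"}
  },
  "certificate_providers": {
    "google_cloud_private_spiffe": {
      "plugin_name": "file_watcher",
      "config": {
        "certificate_file": "/var/run/secrets/workload-spiffe-credentials/certificates.pem",
        "private_key_file": "/var/run/secrets/workload-spiffe-credentials/private_key.pem",
        "ca_certificate_file": "/var/run/secrets/workload-spiffe-credentials/ca_certificates.pem",
        "refresh_interval": "600s"
      }
    }
  },
  "server_listener_resource_name_template": "grpc/server?xds.resource.listening_address=%s"
}
EOF
echo "wrote /tmp/td-grpc-bootstrap-sketch.json"
```

The certificate_providers entry is what PSM security adds over a plain load-balancing bootstrap: it points the gRPC library at the workload certificate files described later in this guide.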

Setup overview

This setup process is an extension of the Cloud Service Mesh setup with GKE and proxyless gRPC services. Existing unmodified steps of that setup procedure are referenced wherever they apply.

The main enhancements to the Cloud Service Mesh setup with GKE are as follows:

  1. Setting up CA Service, in which you create private CA pools and the required certificate authorities.
  2. Creating a GKE cluster with Workload Identity Federation for GKE, mesh certificate features, and CA Service integration.
  3. Configuring mesh certificate issuance on the cluster.
  4. Creating the client and server service accounts.
  5. Setting up the example server that uses xDS APIs and xDS server credentials to acquire security configuration from Cloud Service Mesh.
  6. Setting up the example client that uses xDS credentials.
  7. Updating the Cloud Service Mesh configuration to include security configuration.

You can see code examples for using xDS credentials at the following locations:

Update the Google Cloud CLI

To update the Google Cloud CLI, run the following command:

gcloud components update

Set up environment variables

In this guide, you use Cloud Shell commands, and information that recurs in the commands is represented by environment variables. Set the following environment variables to your specific values in the shell environment before you execute the commands. Each comment line indicates the meaning of the associated environment variable.

# Your project ID
PROJECT_ID=YOUR_PROJECT_ID
# GKE cluster name and zone for this example.
CLUSTER_NAME="secure-psm-cluster"
ZONE="us-east1-d"
# GKE cluster URL derived from the above
GKE_CLUSTER_URL="https://container.googleapis.com/v1/projects/${PROJECT_ID}/locations/${ZONE}/clusters/${CLUSTER_NAME}"
# Workload pool to be used with the GKE cluster
WORKLOAD_POOL="${PROJECT_ID}.svc.id.goog"
# Kubernetes namespace to run client and server demo.
K8S_NAMESPACE='default'
DEMO_BACKEND_SERVICE_NAME='grpc-gke-helloworld-service'
# Compute other values
# Project number for your project
PROJNUM=$(gcloud projects describe ${PROJECT_ID} --format="value(projectNumber)")
# VERSION is the GKE cluster version. Install and use the most recent version
# from the rapid release channel and substitute its version for
# CLUSTER_VERSION, for example:
# VERSION=latest available version
# Note that the minimum required cluster version is 1.21.4-gke.1801.
VERSION="CLUSTER_VERSION"
SA_GKE=service-${PROJNUM}@container-engine-robot.iam.gserviceaccount.com

Enable access to required APIs

This section tells you how to enable access to the necessary APIs.

  1. Run the following command to enable the Cloud Service Mesh and other APIs required for proxyless gRPC service mesh security.

    gcloud services enable \
        container.googleapis.com \
        cloudresourcemanager.googleapis.com \
        compute.googleapis.com \
        trafficdirector.googleapis.com \
        networkservices.googleapis.com \
        networksecurity.googleapis.com \
        privateca.googleapis.com \
        gkehub.googleapis.com
  2. Run the following command to allow the default service account to access the Cloud Service Mesh security API.

    GSA_EMAIL=$(gcloud iam service-accounts list --format='value(email)' \
        --filter='displayName:Compute Engine default service account')

    gcloud projects add-iam-policy-binding ${PROJECT_ID} \
        --member serviceAccount:${GSA_EMAIL} \
        --role roles/trafficdirector.client

Create or update a GKE cluster

Cloud Service Mesh service security depends on the CA Service integration with GKE. The GKE cluster must meet the following requirements in addition to the requirements for setup:

  • Use a minimum cluster version of 1.21.4-gke.1801. If you need features that are in a later version, you can obtain that version from the rapid release channel.
  • The GKE cluster must be enabled and configured with mesh certificates, as described in Creating certificate authorities to issue certificates.
  1. Create a new cluster that uses Workload Identity Federation for GKE. If you are updating an existing cluster, skip to the next step. The value you give for --tags must match the name passed to the --target-tags flag for the firewall-rules create command in the section Configuring Cloud Service Mesh with Cloud Load Balancing components.

    # Create a GKE cluster with GKE managed mesh certificates.
    gcloud container clusters create CLUSTER_NAME \
      --release-channel=rapid \
      --scopes=cloud-platform \
      --image-type=cos_containerd \
      --machine-type=e2-standard-2 \
      --zone=ZONE \
      --workload-pool=PROJECT_ID.svc.id.goog \
      --enable-mesh-certificates \
      --cluster-version=CLUSTER_VERSION \
      --enable-ip-alias \
      --tags=allow-health-checks \
      --workload-metadata=GKE_METADATA

    Cluster creation might take several minutes to complete.

  2. If you are using an existing cluster, turn on Workload Identity Federation for GKE and GKE mesh certificates. Make sure that the cluster was created with the --enable-ip-alias flag, which cannot be used with the update command.

    gcloud container clusters update CLUSTER_NAME \
      --enable-mesh-certificates
  3. Run the following command to switch to the new cluster as the default cluster for your kubectl commands:

    gcloud container clusters get-credentials CLUSTER_NAME \
      --zone ZONE

Register clusters with a fleet

Register the cluster that you created or updated in Creating a GKE cluster with a fleet. Registering the cluster makes it easier for you to configure clusters across multiple projects.

Note that these steps can take up to ten minutes each to complete.

  1. Register your cluster with the fleet:

    gcloud container fleet memberships register CLUSTER_NAME \
      --gke-cluster=ZONE/CLUSTER_NAME \
      --enable-workload-identity --install-connect-agent \
      --manifest-output-file=MANIFEST-FILE_NAME

    Replace the variables as follows:

    • CLUSTER_NAME: Your cluster's name.
    • ZONE: Your cluster's zone.
    • MANIFEST-FILE_NAME: The path where these commands generate the manifest for registration.

    When the registration process succeeds, you see a message such as the following:

    Finished registering the cluster CLUSTER_NAME with the fleet.
  2. Apply the generated manifest file to your cluster:

    kubectl apply -f MANIFEST-FILE_NAME

    When the application process succeeds, you see messages such as the following:

    namespace/gke-connect created
    serviceaccount/connect-agent-sa created
    podsecuritypolicy.policy/gkeconnect-psp created
    role.rbac.authorization.k8s.io/gkeconnect-psp:role created
    rolebinding.rbac.authorization.k8s.io/gkeconnect-psp:rolebinding created
    role.rbac.authorization.k8s.io/agent-updater created
    rolebinding.rbac.authorization.k8s.io/agent-updater created
    role.rbac.authorization.k8s.io/gke-connect-agent-20210416-01-00 created
    clusterrole.rbac.authorization.k8s.io/gke-connect-impersonation-20210416-01-00 created
    clusterrolebinding.rbac.authorization.k8s.io/gke-connect-impersonation-20210416-01-00 created
    clusterrolebinding.rbac.authorization.k8s.io/gke-connect-feature-authorizer-20210416-01-00 created
    rolebinding.rbac.authorization.k8s.io/gke-connect-agent-20210416-01-00 created
    role.rbac.authorization.k8s.io/gke-connect-namespace-getter created
    rolebinding.rbac.authorization.k8s.io/gke-connect-namespace-getter created
    secret/http-proxy created
    deployment.apps/gke-connect-agent-20210416-01-00 created
    service/gke-connect-monitoring created
    secret/creds-gcp created
  3. Get the membership resource from the cluster:

    kubectl get memberships membership -o yaml

    The output should include the Workload Identity pool assigned by the fleet, where PROJECT_ID is your project ID:

    workload_identity_pool: PROJECT_ID.svc.id.goog

    This means that the cluster registered successfully.

Create certificate authorities to issue certificates

To issue certificates to your Pods, create a CA Service pool and the following certificate authorities (CAs):

  • Root CA. This is the root of trust for all issued mesh certificates. You can use an existing root CA if you have one. Create the root CA in the enterprise tier, which is meant for long-lived, low-volume certificate issuance.
  • Subordinate CA. This CA issues certificates for workloads. Create the subordinate CA in the region where your cluster is deployed. Create the subordinate CA in the devops tier, which is meant for short-lived, high-volume certificate issuance.

Creating a subordinate CA is optional, but we strongly recommend creating one rather than using your root CA to issue GKE mesh certificates. If you decide to use the root CA to issue mesh certificates, ensure that the default config-based issuance mode remains permitted.

The subordinate CA can be in a different region from your cluster, but we strongly recommend creating it in the same region as your cluster to optimize performance. You can, however, create the root and subordinate CAs in different regions without any impact on performance or availability.

These regions are supported for CA Service:

Region name                Region description
asia-east1                 Taiwan
asia-east2                 Hong Kong
asia-northeast1            Tokyo
asia-northeast2            Osaka
asia-northeast3            Seoul
asia-south1                Mumbai
asia-south2                Delhi
asia-southeast1            Singapore
asia-southeast2            Jakarta
australia-southeast1       Sydney
australia-southeast2       Melbourne
europe-central2            Warsaw
europe-north1              Finland
europe-southwest1          Madrid
europe-west1               Belgium
europe-west2               London
europe-west3               Frankfurt
europe-west4               Netherlands
europe-west6               Zürich
europe-west8               Milan
europe-west9               Paris
europe-west10              Berlin
europe-west12              Turin
me-central1                Doha
me-central2                Dammam
me-west1                   Tel Aviv
northamerica-northeast1    Montréal
northamerica-northeast2    Toronto
southamerica-east1         São Paulo
southamerica-west1         Santiago
us-central1                Iowa
us-east1                   South Carolina
us-east4                   Northern Virginia
us-east5                   Columbus
us-south1                  Dallas
us-west1                   Oregon
us-west2                   Los Angeles
us-west3                   Salt Lake City
us-west4                   Las Vegas

The list of supported locations can also be checked by running the following command:

gcloud privateca locations list
  1. Grant the IAM roles/privateca.caManager role to individuals who create a CA pool and a CA. Note that for MEMBER, the correct format is user:userid@example.com. If that person is the current user, you can obtain the current user ID with the shell command $(gcloud auth list --filter=status:ACTIVE --format="value(account)").

    gcloud projects add-iam-policy-binding PROJECT_ID \
      --member=MEMBER \
      --role=roles/privateca.caManager
  2. Grant the roles/privateca.admin role for CA Service to individuals who need to modify IAM policies, where MEMBER is an individual who needs this access, specifically, any individuals who perform the steps that follow that grant the privateca.auditor and privateca.certificateManager roles:

    gcloud projects add-iam-policy-binding PROJECT_ID \
      --member=MEMBER \
      --role=roles/privateca.admin
  3. Create the root CA Service pool.

    gcloud privateca pools create ROOT_CA_POOL_NAME \
      --location ROOT_CA_POOL_LOCATION \
      --tier enterprise
  4. Create a root CA.

    gcloud privateca roots create ROOT_CA_NAME --pool ROOT_CA_POOL_NAME \
      --subject "CN=ROOT_CA_NAME, O=ROOT_CA_ORGANIZATION" \
      --key-algorithm="ec-p256-sha256" \
      --max-chain-length=1 \
      --location ROOT_CA_POOL_LOCATION

    For this demonstration setup, use the following values for the variables:

    • ROOT_CA_POOL_NAME=td_sec_pool
    • ROOT_CA_NAME=pkcs2-ca
    • ROOT_CA_POOL_LOCATION=us-east1
    • ROOT_CA_ORGANIZATION="TestCorpLLC"
  5. Create the subordinate pool and subordinate CA. Ensure that the default config-based issuance mode remains permitted.

    gcloud privateca pools create SUBORDINATE_CA_POOL_NAME \
      --location SUBORDINATE_CA_POOL_LOCATION \
      --tier devops

    gcloud privateca subordinates create SUBORDINATE_CA_NAME \
      --pool SUBORDINATE_CA_POOL_NAME \
      --location SUBORDINATE_CA_POOL_LOCATION \
      --issuer-pool ROOT_CA_POOL_NAME \
      --issuer-location ROOT_CA_POOL_LOCATION \
      --subject "CN=SUBORDINATE_CA_NAME, O=SUBORDINATE_CA_ORGANIZATION" \
      --key-algorithm "ec-p256-sha256" \
      --use-preset-profile subordinate_mtls_pathlen_0

    For this demonstration setup, use the following values for the variables:

    • SUBORDINATE_CA_POOL_NAME="td-ca-pool"
    • SUBORDINATE_CA_POOL_LOCATION=us-east1
    • SUBORDINATE_CA_NAME="td-ca"
    • SUBORDINATE_CA_ORGANIZATION="TestCorpLLC"
    • ROOT_CA_POOL_NAME=td_sec_pool
    • ROOT_CA_POOL_LOCATION=us-east1
  6. Grant the IAM privateca.auditor role for the root CA pool to allow access from the GKE service account:

    gcloud privateca pools add-iam-policy-binding ROOT_CA_POOL_NAME \
      --location ROOT_CA_POOL_LOCATION \
      --role roles/privateca.auditor \
      --member="serviceAccount:service-PROJNUM@container-engine-robot.iam.gserviceaccount.com"
  7. Grant the IAM privateca.certificateManager role for the subordinate CA pool to allow access from the GKE service account:

    gcloud privateca pools add-iam-policy-binding SUBORDINATE_CA_POOL_NAME \
      --location SUBORDINATE_CA_POOL_LOCATION \
      --role roles/privateca.certificateManager \
      --member="serviceAccount:service-PROJNUM@container-engine-robot.iam.gserviceaccount.com"
  8. Save the following WorkloadCertificateConfig YAML configuration to tell your cluster how to issue mesh certificates:

    apiVersion: security.cloud.google.com/v1
    kind: WorkloadCertificateConfig
    metadata:
      name: default
    spec:
      # Required. The CA service that issues your certificates.
      certificateAuthorityConfig:
        certificateAuthorityServiceConfig:
          endpointURI: ISSUING_CA_POOL_URI
      # Required. The key algorithm to use. Choice of RSA or ECDSA.
      #
      # To maximize compatibility with various TLS stacks, your workloads
      # should use keys of the same family as your root and subordinate CAs.
      #
      # To use RSA, specify configuration such as:
      #   keyAlgorithm:
      #     rsa:
      #       modulusSize: 4096
      #
      # Currently, the only supported ECDSA curves are "P256" and "P384", and
      # the only supported RSA modulus sizes are 2048, 3072 and 4096.
      keyAlgorithm:
        rsa:
          modulusSize: 4096
      # Optional. Validity duration of issued certificates, in seconds.
      #
      # Defaults to 86400 (1 day) if not specified.
      validityDurationSeconds: 86400
      # Optional. Try to start rotating the certificate once this
      # percentage of validityDurationSeconds is remaining.
      #
      # Defaults to 50 if not specified.
      rotationWindowPercentage: 50

    Replace the following:

    • PROJECT_ID: the project ID of the project in which your cluster runs.
    • ISSUING_CA_POOL_URI: the fully qualified URI of the CA pool that issues your mesh certificates. This can be either your subordinate CA (recommended) or your root CA. The format is:
      //privateca.googleapis.com/projects/PROJECT_ID/locations/SUBORDINATE_CA_POOL_LOCATION/caPools/SUBORDINATE_CA_POOL_NAME
  9. Save the following TrustConfig YAML configuration to tell your cluster how to trust the issued certificates:

    apiVersion: security.cloud.google.com/v1
    kind: TrustConfig
    metadata:
      name: default
    spec:
      # You must include a trustStores entry for the trust domain that
      # your cluster is enrolled in.
      trustStores:
      - trustDomain: PROJECT_ID.svc.id.goog
        # Trust identities in this trustDomain if they appear in a certificate
        # that chains up to this root CA.
        trustAnchors:
        - certificateAuthorityServiceURI: ROOT_CA_POOL_URI

    Replace the following:

    • PROJECT_ID: the project ID of the project in which your cluster runs.
    • ROOT_CA_POOL_URI: the fully qualified URI of the root CA pool. The format is:
      //privateca.googleapis.com/projects/PROJECT_ID/locations/ROOT_CA_POOL_LOCATION/caPools/ROOT_CA_POOL_NAME
  10. Apply the configurations to your cluster:

    kubectl apply -f WorkloadCertificateConfig.yaml
    kubectl apply -f TrustConfig.yaml
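The fully qualified pool URIs referenced in the two configurations above can be assembled mechanically from the values used in this demonstration, as in this sketch (the project ID here is a hypothetical placeholder):

```shell
# Assemble the fully qualified root CA pool URI from its components.
PROJECT_ID="my-project"            # hypothetical project ID
ROOT_CA_POOL_LOCATION="us-east1"   # demonstration value from this guide
ROOT_CA_POOL_NAME="td_sec_pool"    # demonstration value from this guide

ROOT_CA_POOL_URI="//privateca.googleapis.com/projects/${PROJECT_ID}/locations/${ROOT_CA_POOL_LOCATION}/caPools/${ROOT_CA_POOL_NAME}"
echo "${ROOT_CA_POOL_URI}"
```

The ISSUING_CA_POOL_URI for the subordinate pool follows the same pattern, with the subordinate pool's location and name substituted.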

Create a proxyless gRPC service with NEGs

For PSM security, you need a proxyless gRPC server capable of using xDS to acquire security configuration from Cloud Service Mesh. This step is similar to Configuring GKE services with NEGs in the PSM load balancing setup guide, but you use the xDS-enabled helloworld server in the xDS example in the grpc-java repository instead of the java-example-hostname image.

You build and run this server in a container built from an openjdk:8-jdk image. You also use the named NEG feature, which lets you specify a name for the NEG. This simplifies later steps because your deployment knows the name of the NEG without having to look it up.

The following is a complete example of the gRPC server Kubernetes spec.Note the following:

  • The spec creates a Kubernetes service account example-grpc-server that is used by the gRPC server Pod.
  • The spec uses the name field in the cloud.google.com/neg annotation of the service to specify the NEG name example-grpc-server.
  • The variable ${PROJNUM} represents the project number of your project.
  • The spec uses the initContainers section to run a bootstrap generator to populate the bootstrap file that the proxyless gRPC library needs. This bootstrap file resides at /tmp/grpc-xds/td-grpc-bootstrap.json in the gRPC server container called example-grpc-server.

Add the following annotation to your Pod spec:

 annotations:
   security.cloud.google.com/use-workload-certificates: ""

You can see the correct placement in the full spec that follows.

On creation, each Pod gets a volume at /var/run/secrets/workload-spiffe-credentials. This volume contains the following:

  • private_key.pem is an automatically generated private key.
  • certificates.pem is a bundle of PEM-formatted certificates that can be presented to another Pod as the client certificate chain, or used as a server certificate chain.
  • ca_certificates.pem is a bundle of PEM-formatted certificates to use as trust anchors when validating the client certificate chain presented by another Pod, or the server certificate chain received when connecting to another Pod.

Note that ca_certificates.pem contains certificates for the local trust domain for the workloads, which is the cluster's workload pool.

The leaf certificate in certificates.pem contains the following plain-text SPIFFE identity assertion:

spiffe://WORKLOAD_POOL/ns/NAMESPACE/sa/KUBERNETES_SERVICE_ACCOUNT

In this assertion:

  • WORKLOAD_POOL is the name of the cluster workload pool.
  • NAMESPACE is the namespace of your Kubernetes service account.
  • KUBERNETES_SERVICE_ACCOUNT is the name of your Kubernetes service account.
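Assembling the SPIFFE ID from these components can be sketched as follows; the values are hypothetical, and on a real cluster they come from your workload pool and Kubernetes service account:

```shell
# Build the SPIFFE identity string from its three components.
WORKLOAD_POOL="my-project.svc.id.goog"       # hypothetical workload pool
K8S_NAMESPACE="default"                      # namespace of the service account
K8S_SERVICE_ACCOUNT="example-grpc-server"    # Kubernetes service account name

SPIFFE_ID="spiffe://${WORKLOAD_POOL}/ns/${K8S_NAMESPACE}/sa/${K8S_SERVICE_ACCOUNT}"
echo "${SPIFFE_ID}"
```

Authorization policies match on exactly this string, so the workload pool, namespace, and service account together determine a workload's identity.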

The following instructions for your language create the spec to use in this example.

Java

  1. Run the following command to ensure that the project number is correctly set:

    if [ -z "$PROJNUM" ] ; then export PROJNUM=$(gcloud projects describe $(gcloud info --format='value(config.project)') --format="value(projectNumber)") ; fi ; echo $PROJNUM
  2. Create the spec:

    cat << EOF > example-grpc-server.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
     name: example-grpc-server
     namespace: default
     annotations:
       iam.gke.io/gcp-service-account: ${PROJNUM}-compute@developer.gserviceaccount.com
    ---
    apiVersion: v1
    kind: Service
    metadata:
     name: example-grpc-server
     namespace: default
     labels:
       k8s-app: example-grpc-server
     annotations:
       cloud.google.com/neg: '{"exposed_ports":{"8080":{"name": "example-grpc-server"}}}'
    spec:
     ports:
     - name: helloworld
       port: 8080
       protocol: TCP
       targetPort: 50051
     selector:
       k8s-app: example-grpc-server
     type: ClusterIP
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: example-grpc-server
     namespace: default
     labels:
       k8s-app: example-grpc-server
    spec:
     replicas: 1
     selector:
       matchLabels:
         k8s-app: example-grpc-server
     strategy: {}
     template:
       metadata:
         annotations:
            security.cloud.google.com/use-workload-certificates: ""
         labels:
           k8s-app: example-grpc-server
       spec:
         containers:
         - image: openjdk:8-jdk
           imagePullPolicy: IfNotPresent
           name: example-grpc-server
           command:
           - /bin/sleep
           - inf
           env:
           - name: GRPC_XDS_BOOTSTRAP
             value: "/tmp/grpc-xds/td-grpc-bootstrap.json"
           ports:
           - protocol: TCP
             containerPort: 50051
           resources:
             limits:
               cpu: 800m
               memory: 512Mi
             requests:
               cpu: 100m
               memory: 512Mi
           volumeMounts:
           - name: grpc-td-conf
             mountPath: /tmp/grpc-xds/
         initContainers:
         - name: grpc-td-init
           image: gcr.io/trafficdirector-prod/td-grpc-bootstrap:0.16.0
           imagePullPolicy: Always
           args:
           - --output
           - "/tmp/bootstrap/td-grpc-bootstrap.json"
           - --node-metadata=app=helloworld
           resources:
             limits:
               cpu: 100m
               memory: 100Mi
             requests:
               cpu: 10m
               memory: 100Mi
           volumeMounts:
           - name: grpc-td-conf
             mountPath: /tmp/bootstrap/
         serviceAccountName: example-grpc-server
         volumes:
         - name: grpc-td-conf
           emptyDir:
             medium: Memory
    EOF

C++

  1. Run the following command to ensure that the project number is correctly set:

    if [ -z "$PROJNUM" ] ; then export PROJNUM=$(gcloud projects describe $(gcloud info --format='value(config.project)') --format="value(projectNumber)") ; fi ; echo $PROJNUM
  2. Create the spec:

    cat << EOF > example-grpc-server.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
     name: example-grpc-server
     namespace: default
     annotations:
       iam.gke.io/gcp-service-account: ${PROJNUM}-compute@developer.gserviceaccount.com
    ---
    apiVersion: v1
    kind: Service
    metadata:
     name: example-grpc-server
     namespace: default
     labels:
       k8s-app: example-grpc-server
     annotations:
       cloud.google.com/neg: '{"exposed_ports":{"8080":{"name": "example-grpc-server"}}}'
    spec:
     ports:
     - name: helloworld
       port: 8080
       protocol: TCP
       targetPort: 50051
     selector:
       k8s-app: example-grpc-server
     type: ClusterIP
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: example-grpc-server
     namespace: default
     labels:
       k8s-app: example-grpc-server
    spec:
     replicas: 1
     selector:
       matchLabels:
         k8s-app: example-grpc-server
     strategy: {}
     template:
       metadata:
         annotations:
            security.cloud.google.com/use-workload-certificates: ""
         labels:
           k8s-app: example-grpc-server
       spec:
         containers:
         - image: phusion/baseimage:18.04-1.0.0
           imagePullPolicy: IfNotPresent
           name: example-grpc-server
           command:
           - /bin/sleep
           - inf
           env:
           - name: GRPC_XDS_BOOTSTRAP
             value: "/tmp/grpc-xds/td-grpc-bootstrap.json"
           ports:
           - protocol: TCP
             containerPort: 50051
           resources:
             limits:
               cpu: 8
               memory: 8Gi
             requests:
               cpu: 300m
               memory: 512Mi
           volumeMounts:
           - name: grpc-td-conf
             mountPath: /tmp/grpc-xds/
         initContainers:
         - name: grpc-td-init
           image: gcr.io/trafficdirector-prod/td-grpc-bootstrap:0.16.0
           imagePullPolicy: Always
           args:
           - --output
           - "/tmp/bootstrap/td-grpc-bootstrap.json"
           - --node-metadata=app=helloworld
           resources:
             limits:
               cpu: 100m
               memory: 100Mi
             requests:
               cpu: 10m
               memory: 100Mi
           volumeMounts:
           - name: grpc-td-conf
             mountPath: /tmp/bootstrap/
         serviceAccountName: example-grpc-server
         volumes:
         - name: grpc-td-conf
           emptyDir:
             medium: Memory
    EOF

Python

  1. Run the following command to ensure that the project number is correctly set:

    if [ -z "$PROJNUM" ] ; then export PROJNUM=$(gcloud projects describe $(gcloud info --format='value(config.project)') --format="value(projectNumber)") ; fi ; echo $PROJNUM
  2. Create the spec:

    cat << EOF > example-grpc-server.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
     name: example-grpc-server
     namespace: default
     annotations:
       iam.gke.io/gcp-service-account: ${PROJNUM}-compute@developer.gserviceaccount.com
    ---
    apiVersion: v1
    kind: Service
    metadata:
     name: example-grpc-server
     namespace: default
     labels:
       k8s-app: example-grpc-server
     annotations:
       cloud.google.com/neg: '{"exposed_ports":{"8080":{"name": "example-grpc-server"}}}'
    spec:
     ports:
     - name: helloworld
       port: 8080
       protocol: TCP
       targetPort: 50051
     selector:
       k8s-app: example-grpc-server
     type: ClusterIP
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: example-grpc-server
     namespace: default
     labels:
       k8s-app: example-grpc-server
    spec:
     replicas: 1
     selector:
       matchLabels:
         k8s-app: example-grpc-server
     strategy: {}
     template:
       metadata:
         annotations:
            security.cloud.google.com/use-workload-certificates: ""
         labels:
           k8s-app: example-grpc-server
       spec:
         containers:
         - image: phusion/baseimage:18.04-1.0.0
           imagePullPolicy: IfNotPresent
           name: example-grpc-server
           command:
           - /bin/sleep
           - inf
           env:
           - name: GRPC_XDS_BOOTSTRAP
             value: "/tmp/grpc-xds/td-grpc-bootstrap.json"
           ports:
           - protocol: TCP
             containerPort: 50051
           resources:
             limits:
               cpu: 8
               memory: 8Gi
             requests:
               cpu: 300m
               memory: 512Mi
           volumeMounts:
           - name: grpc-td-conf
             mountPath: /tmp/grpc-xds/
         initContainers:
         - name: grpc-td-init
           image: gcr.io/trafficdirector-prod/td-grpc-bootstrap:0.16.0
           imagePullPolicy: Always
           args:
           - --output
           - "/tmp/bootstrap/td-grpc-bootstrap.json"
           - --node-metadata=app=helloworld
           resources:
             limits:
               cpu: 100m
               memory: 100Mi
             requests:
               cpu: 10m
               memory: 100Mi
           volumeMounts:
           - name: grpc-td-conf
             mountPath: /tmp/bootstrap/
         serviceAccountName: example-grpc-server
         volumes:
         - name: grpc-td-conf
           emptyDir:
             medium: Memory
    EOF

Go

  1. Run the following command to ensure that the project number is correctly set:

    if [ -z "$PROJNUM" ] ; then export PROJNUM=$(gcloud projects describe $(gcloud info --format='value(config.project)') --format="value(projectNumber)") ; fi ; echo $PROJNUM
  2. Create the spec:

    cat << EOF > example-grpc-server.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
     name: example-grpc-server
     namespace: default
     annotations:
       iam.gke.io/gcp-service-account: ${PROJNUM}-compute@developer.gserviceaccount.com
    ---
    apiVersion: v1
    kind: Service
    metadata:
     name: example-grpc-server
     namespace: default
     labels:
       k8s-app: example-grpc-server
     annotations:
       cloud.google.com/neg: '{"exposed_ports":{"8080":{"name": "example-grpc-server"}}}'
    spec:
     ports:
     - name: helloworld
       port: 8080
       protocol: TCP
       targetPort: 50051
     selector:
       k8s-app: example-grpc-server
     type: ClusterIP
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: example-grpc-server
     namespace: default
     labels:
       k8s-app: example-grpc-server
    spec:
     replicas: 1
     selector:
       matchLabels:
         k8s-app: example-grpc-server
     strategy: {}
     template:
       metadata:
         annotations:
            security.cloud.google.com/use-workload-certificates: ""
         labels:
           k8s-app: example-grpc-server
       spec:
         containers:
         - image: golang:1.16-alpine
           imagePullPolicy: IfNotPresent
           name: example-grpc-server
           command:
           - /bin/sleep
           - inf
           env:
           - name: GRPC_XDS_BOOTSTRAP
             value: "/tmp/grpc-xds/td-grpc-bootstrap.json"
           ports:
           - protocol: TCP
             containerPort: 50051
           resources:
             limits:
               cpu: 8
               memory: 8Gi
             requests:
               cpu: 300m
               memory: 512Mi
           volumeMounts:
           - name: grpc-td-conf
             mountPath: /tmp/grpc-xds/
         initContainers:
         - name: grpc-td-init
           image: gcr.io/trafficdirector-prod/td-grpc-bootstrap:0.16.0
           imagePullPolicy: Always
           args:
           - --output
           - "/tmp/bootstrap/td-grpc-bootstrap.json"
           - --node-metadata=app=helloworld
           resources:
             limits:
               cpu: 100m
               memory: 100Mi
             requests:
               cpu: 10m
               memory: 100Mi
           volumeMounts:
           - name: grpc-td-conf
             mountPath: /tmp/bootstrap/
         serviceAccountName: example-grpc-server
         volumes:
         - name: grpc-td-conf
           emptyDir:
             medium: Memory
    EOF

    Complete the process as follows.

  1. Apply the spec:

    kubectl apply -f example-grpc-server.yaml
  2. Grant the required roles to the service account:

    gcloud iam service-accounts add-iam-policy-binding \
      --role roles/iam.workloadIdentityUser \
      --member "serviceAccount:${PROJECT_ID}.svc.id.goog[default/example-grpc-server]" \
      ${PROJNUM}-compute@developer.gserviceaccount.com

    gcloud projects add-iam-policy-binding ${PROJECT_ID} \
      --member "serviceAccount:${PROJECT_ID}.svc.id.goog[default/example-grpc-server]" \
      --role roles/trafficdirector.client
  3. Run these commands to verify that the service and Pod are created correctly:

    kubectl get deploy/example-grpc-server
    kubectl get svc/example-grpc-server
  4. Verify that the NEG name is correct:

    gcloud compute network-endpoint-groups list \
        --filter "name=example-grpc-server" --format "value(name)"

    The previous command is expected to return the NEG name example-grpc-server.

Configure Cloud Service Mesh with Google Cloud load balancing components

The steps in this section are similar to those in Configuring Cloud Service Mesh with load balancing components, but there are some changes, as described in the following sections.

Create the health check, firewall rule, and backend service

When the gRPC server is configured to use mTLS, gRPC health checks don't work because the health checking client cannot present a valid client certificate to the servers. You can address this in one of two ways.

In the first approach, you have the server create an additional serving port that is designated as the health checking port. This port is attached to a special health check service that serves plain text or TLS on that port.

The xDS helloworld example server uses PORT_NUMBER + 1 as the plain text health checking port. The example uses 50052 as the health checking port because 50051 is the gRPC application server port.

In the second approach, you configure health checking to check only TCP connectivity to the application serving port. This checks only connectivity, and it generates unnecessary traffic to the server when there are unsuccessful TLS handshakes. For this reason, we recommend that you use the first approach.

  1. Create the health check. Note that health checking does not start until you create and start the server.

    • If you are creating a designated serving port for health checking, which is the approach we recommend, use this command:

      gcloud compute health-checks create grpc grpc-gke-helloworld-hc \
          --enable-logging --port 50052
    • If you are creating a TCP health check, which we don't recommend, use this command:

      gcloud compute health-checks create tcp grpc-gke-helloworld-hc \
          --use-serving-port
  2. Create the firewall rule. Ensure that the value of --target-tags matches the value you provided for --tags in the section Create or update a GKE cluster.

    gcloud compute firewall-rules create grpc-gke-allow-health-checks \
      --network default --action allow --direction INGRESS \
      --source-ranges 35.191.0.0/16,130.211.0.0/22 \
      --target-tags allow-health-checks \
      --rules tcp:50051-50052
  3. Create the backend service:

    gcloud compute backend-services create grpc-gke-helloworld-service \
       --global \
       --load-balancing-scheme=INTERNAL_SELF_MANAGED \
       --protocol=GRPC \
       --health-checks grpc-gke-helloworld-hc
  4. Attach the NEG to the backend service:

    gcloud compute backend-services add-backend grpc-gke-helloworld-service \
       --global \
       --network-endpoint-group example-grpc-server \
       --network-endpoint-group-zone ${ZONE} \
       --balancing-mode RATE \
       --max-rate-per-endpoint 5

Create the routing rule map

This is similar to how you create a routing rule map in Cloud Service Mesh setup with Google Kubernetes Engine and proxyless gRPC services.

  1. Create the URL map:

    gcloud compute url-maps create grpc-gke-url-map \
       --default-service grpc-gke-helloworld-service
  2. Add the path matcher to the URL map:

    gcloud compute url-maps add-path-matcher grpc-gke-url-map \
       --default-service grpc-gke-helloworld-service \
       --path-matcher-name grpc-gke-path-matcher \
       --new-hosts helloworld-gke:8000
  3. Create the target gRPC proxy:

    gcloud compute target-grpc-proxies create grpc-gke-proxy \
       --url-map grpc-gke-url-map --validate-for-proxyless
  4. Create the forwarding rule:

    gcloud compute forwarding-rules create grpc-gke-forwarding-rule \
      --global \
      --load-balancing-scheme=INTERNAL_SELF_MANAGED \
      --address=0.0.0.0 \
      --target-grpc-proxy=grpc-gke-proxy \
      --ports 8000 \
      --network default

Configure Cloud Service Mesh with proxyless gRPC Security

This example demonstrates how to configure mTLS on the client and server sides.

Format for policy references

Note the following required format for referring to server TLS and client TLS policies:

projects/PROJECT_ID/locations/global/[serverTlsPolicies|clientTlsPolicies]/[server-tls-policy|client-mtls-policy]

For example:

projects/PROJECT_ID/locations/global/serverTlsPolicies/server-tls-policy
projects/PROJECT_ID/locations/global/clientTlsPolicies/client-mtls-policy
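As a quick sanity check, you can assemble both references from your project ID with plain shell interpolation. The following sketch uses `my-project` as a placeholder value, not a real project:

```shell
# Build the fully qualified policy references; PROJECT_ID is a placeholder.
PROJECT_ID="my-project"
SERVER_TLS_POLICY_REF="projects/${PROJECT_ID}/locations/global/serverTlsPolicies/server-tls-policy"
CLIENT_TLS_POLICY_REF="projects/${PROJECT_ID}/locations/global/clientTlsPolicies/client-mtls-policy"
echo "${SERVER_TLS_POLICY_REF}"
echo "${CLIENT_TLS_POLICY_REF}"
```

The same interpolation pattern is used later in this guide wherever a temporary YAML file embeds `${PROJECT_ID}`.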

Configure mTLS on the server side

First, you create a server TLS policy. The policy asks the gRPC server side to use the certificateProviderInstance plugin configuration identified by the name google_cloud_private_spiffe for the identity certificate, which is part of the serverCertificate. The mtlsPolicy section indicates mTLS security and uses the same google_cloud_private_spiffe as the plugin configuration for clientValidationCa, which is the root (validation) certificate specification.

Next, you create an endpoint policy. This specifies that a backend, for example a gRPC server, using port 50051 with any or no metadata labels, receives the attached server TLS policy named server-mtls-policy. You specify metadata labels using MATCH_ALL. You create the endpoint policy with a temporary file ep-mtls-psms.yaml that contains the values for the endpoint policy resource, using the policy that you already defined.

  1. Create a temporary file server-mtls-policy.yaml in the current directory with the values of the server TLS policy resource:

    name: "projects/${PROJECT_ID}/locations/global/serverTlsPolicies/server-mtls-policy"
    serverCertificate:
      certificateProviderInstance:
        pluginInstance: google_cloud_private_spiffe
    mtlsPolicy:
      clientValidationCa:
      - certificateProviderInstance:
          pluginInstance: google_cloud_private_spiffe
  2. Create a server TLS policy resource called server-mtls-policy by importing the temporary file server-mtls-policy.yaml:

    gcloud network-security server-tls-policies import server-mtls-policy \
      --source=server-mtls-policy.yaml --location=global
  3. Create the endpoint policy by creating the temporary file ep-mtls-psms.yaml:

    name: "ep-mtls-psms"
    type: "GRPC_SERVER"
    serverTlsPolicy: "projects/${PROJECT_ID}/locations/global/serverTlsPolicies/server-mtls-policy"
    trafficPortSelector:
      ports:
      - "50051"
    endpointMatcher:
      metadataLabelMatcher:
        metadataLabelMatchCriteria: "MATCH_ALL"
        metadataLabels:
        - labelName: app
          labelValue: helloworld
  4. Create the endpoint policy resource by importing the file ep-mtls-psms.yaml:

    gcloud beta network-services endpoint-policies import ep-mtls-psms \
      --source=ep-mtls-psms.yaml --location=global

Configure mTLS on the client side

The client-side security policy is attached to the backend service. When a client accesses a backend (the gRPC server) through the backend service, the attached client-side security policy is sent to the client.

  1. Create the client TLS policy resource contents in a temporary file called client-mtls-policy.yaml in the current directory:

    name: "client-mtls-policy"
    clientCertificate:
      certificateProviderInstance:
        pluginInstance: google_cloud_private_spiffe
    serverValidationCa:
    - certificateProviderInstance:
        pluginInstance: google_cloud_private_spiffe
  2. Create the client TLS policy resource called client-mtls-policy by importing the temporary file client-mtls-policy.yaml:

    gcloud network-security client-tls-policies import client-mtls-policy \
      --source=client-mtls-policy.yaml --location=global
  3. Create a snippet in a temporary file to reference this policy and add details for subjectAltNames in the SecuritySettings message, as in the following example. Replace ${PROJECT_ID} with your project ID value, which is the value of the ${PROJECT_ID} environment variable described previously. Note that example-grpc-server in subjectAltNames is the Kubernetes service account name that is used for the gRPC server Pod in the deployment spec.

    if [ -z "$PROJECT_ID" ] ; then echo Please make sure PROJECT_ID is set. ; fi

    cat << EOF > client-security-settings.yaml
    securitySettings:
      clientTlsPolicy: projects/${PROJECT_ID}/locations/global/clientTlsPolicies/client-mtls-policy
      subjectAltNames:
        - "spiffe://${PROJECT_ID}.svc.id.goog/ns/default/sa/example-grpc-server"
    EOF
  4. Add the securitySettings message to the backend service that you already created. These steps export the current backend service contents, add the client securitySettings message, and re-import the new content to update the backend service.

    gcloud compute backend-services export grpc-gke-helloworld-service --global \
      --destination=/tmp/grpc-gke-helloworld-service.yaml

    cat /tmp/grpc-gke-helloworld-service.yaml client-security-settings.yaml \
      >/tmp/grpc-gke-helloworld-service1.yaml

    gcloud compute backend-services import grpc-gke-helloworld-service --global \
      --source=/tmp/grpc-gke-helloworld-service1.yaml -q
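The export-append-import flow works because concatenating the exported YAML with the securitySettings snippet yields a single merged document that gcloud accepts on import. The following self-contained sketch demonstrates just the merge step, using illustrative stand-in file contents rather than a real exported backend service:

```shell
# Stand-in for the exported backend service (illustrative, not a real export).
cat > /tmp/backend.yaml <<'EOF'
name: grpc-gke-helloworld-service
protocol: GRPC
EOF

# Stand-in for client-security-settings.yaml.
cat > /tmp/security.yaml <<'EOF'
securitySettings:
  clientTlsPolicy: projects/my-project/locations/global/clientTlsPolicies/client-mtls-policy
EOF

# Concatenation produces the merged document that would be re-imported.
cat /tmp/backend.yaml /tmp/security.yaml > /tmp/merged-backend.yaml
cat /tmp/merged-backend.yaml
```

This works because securitySettings is a new top-level key; if the exported file already contained a securitySettings block, simple concatenation would produce a duplicate key and you would edit the file instead.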

Verify the configuration

Cloud Service Mesh configuration is now complete, including server-side and client-side security. Next, you prepare and run the server and client workloads. This completes the example.

Create a proxyless gRPC client

This step is similar to the previous section Creating a proxyless gRPC service. You use the xDS-enabled helloworld client from the xDS example directory in the grpc-java repository. You build and run the client in a container built from an openjdk:8-jdk image. The gRPC client Kubernetes spec does the following.

  • It creates a Kubernetes service account example-grpc-client that is used by the gRPC client Pod.
  • ${PROJNUM} represents the project number of your project; replace it with the actual number.

Add the following annotation to your Pod spec:

  annotations:
    security.cloud.google.com/use-workload-certificates: ""

On creation, each Pod gets a volume at /var/run/secrets/workload-spiffe-credentials. This volume contains the following:

  • private_key.pem is an automatically generated private key.
  • certificates.pem is a bundle of PEM-formatted certificates that can be presented to another Pod as the client certificate chain, or used as a server certificate chain.
  • ca_certificates.pem is a bundle of PEM-formatted certificates to use as trust anchors when validating the client certificate chain presented by another Pod, or the server certificate chain received when connecting to another Pod.

Note that ca_certificates.pem contains the root certificates for the local trust domain for the workloads, which is the cluster's workload pool.
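You can read the SPIFFE identity out of certificates.pem with openssl. Because this page cannot assume a live Pod volume, the sketch below first generates a throwaway certificate carrying a SPIFFE URI SAN (the workload pool value is illustrative) and then inspects it exactly as you would inspect the real file; it assumes OpenSSL 1.1.1 or later for the -addext and -ext options:

```shell
# Generate a throwaway key and certificate with a SPIFFE URI SAN
# (illustrative values, not a real workload certificate).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -addext "subjectAltName=URI:spiffe://example-pool/ns/default/sa/example-grpc-server" \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null

# Print the subjectAltName extension, which carries the SPIFFE identity.
# On a real Pod, point this at
# /var/run/secrets/workload-spiffe-credentials/certificates.pem instead.
openssl x509 -in /tmp/demo-cert.pem -noout -ext subjectAltName
```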

The leaf certificate in certificates.pem contains the following plain-text SPIFFE identity assertion:

spiffe://WORKLOAD_POOL/ns/NAMESPACE/sa/KUBERNETES_SERVICE_ACCOUNT

In this assertion:

  • WORKLOAD_POOL is the name of the cluster workload pool.
  • NAMESPACE is the namespace of your Kubernetes service account.
  • KUBERNETES_SERVICE_ACCOUNT is the name of your Kubernetes service account.
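For example, the identity of the server workload used in this guide can be assembled as follows (the workload pool value is an illustrative stand-in):

```shell
# Assemble a SPIFFE ID from its components; values are illustrative.
WORKLOAD_POOL="my-project.svc.id.goog"     # the cluster's workload pool
NAMESPACE="default"                        # Kubernetes namespace
SERVICE_ACCOUNT="example-grpc-server"      # Kubernetes service account name
SPIFFE_ID="spiffe://${WORKLOAD_POOL}/ns/${NAMESPACE}/sa/${SERVICE_ACCOUNT}"
echo "${SPIFFE_ID}"
# → spiffe://my-project.svc.id.goog/ns/default/sa/example-grpc-server
```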

The following instructions for your language create the spec to use in this example.

Java

  1. Run the following command to ensure that the project number is correctly set:

    if [ -z "$PROJNUM" ] ; then export PROJNUM=$(gcloud projects describe $(gcloud info --format='value(config.project)') --format="value(projectNumber)") ; fi ; echo $PROJNUM
  2. Create the following spec:

    cat << EOF > example-grpc-client.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: example-grpc-client
      namespace: default
      annotations:
        iam.gke.io/gcp-service-account: ${PROJNUM}-compute@developer.gserviceaccount.com
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-grpc-client
      namespace: default
      labels:
        k8s-app: example-grpc-client
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s-app: example-grpc-client
      strategy: {}
      template:
        metadata:
          annotations:
            security.cloud.google.com/use-workload-certificates: ""
          labels:
            k8s-app: example-grpc-client
        spec:
          containers:
          - image: openjdk:8-jdk
            imagePullPolicy: IfNotPresent
            name: example-grpc-client
            command:
            - /bin/sleep
            - inf
            env:
            - name: GRPC_XDS_BOOTSTRAP
              value: "/tmp/grpc-xds/td-grpc-bootstrap.json"
            resources:
              limits:
                cpu: 800m
                memory: 512Mi
              requests:
                cpu: 100m
                memory: 512Mi
            volumeMounts:
            - name: grpc-td-conf
              mountPath: /tmp/grpc-xds/
          initContainers:
          - name: grpc-td-init
            image: gcr.io/trafficdirector-prod/td-grpc-bootstrap:0.16.0
            imagePullPolicy: Always
            args:
            - --output
            - "/tmp/bootstrap/td-grpc-bootstrap.json"
            resources:
              limits:
                cpu: 100m
                memory: 100Mi
              requests:
                cpu: 10m
                memory: 100Mi
            volumeMounts:
            - name: grpc-td-conf
              mountPath: /tmp/bootstrap/
          serviceAccountName: example-grpc-client
          volumes:
          - name: grpc-td-conf
            emptyDir:
              medium: Memory
    EOF

C++

  1. Run the following command to ensure that the project number is correctly set:

    if [ -z "$PROJNUM" ] ; then export PROJNUM=$(gcloud projects describe $(gcloud info --format='value(config.project)') --format="value(projectNumber)") ; fi ; echo $PROJNUM
  2. Create the following spec:

    cat << EOF > example-grpc-client.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: example-grpc-client
      namespace: default
      annotations:
        iam.gke.io/gcp-service-account: ${PROJNUM}-compute@developer.gserviceaccount.com
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-grpc-client
      namespace: default
      labels:
        k8s-app: example-grpc-client
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s-app: example-grpc-client
      strategy: {}
      template:
        metadata:
          annotations:
            security.cloud.google.com/use-workload-certificates: ""
          labels:
            k8s-app: example-grpc-client
        spec:
          containers:
          - image: phusion/baseimage:18.04-1.0.0
            imagePullPolicy: IfNotPresent
            name: example-grpc-client
            command:
            - /bin/sleep
            - inf
            env:
            - name: GRPC_XDS_BOOTSTRAP
              value: "/tmp/grpc-xds/td-grpc-bootstrap.json"
            resources:
              limits:
                cpu: 8
                memory: 8Gi
              requests:
                cpu: 300m
                memory: 512Mi
            volumeMounts:
            - name: grpc-td-conf
              mountPath: /tmp/grpc-xds/
          initContainers:
          - name: grpc-td-init
            image: gcr.io/trafficdirector-prod/td-grpc-bootstrap:0.16.0
            imagePullPolicy: Always
            args:
            - --output
            - "/tmp/bootstrap/td-grpc-bootstrap.json"
            resources:
              limits:
                cpu: 100m
                memory: 100Mi
              requests:
                cpu: 10m
                memory: 100Mi
            volumeMounts:
            - name: grpc-td-conf
              mountPath: /tmp/bootstrap/
          serviceAccountName: example-grpc-client
          volumes:
          - name: grpc-td-conf
            emptyDir:
              medium: Memory
    EOF

Python

  1. Run the following command to ensure that the project number is correctly set:

    if [ -z "$PROJNUM" ] ; then export PROJNUM=$(gcloud projects describe $(gcloud info --format='value(config.project)') --format="value(projectNumber)") ; fi ; echo $PROJNUM
  2. Create the following spec:

    cat << EOF > example-grpc-client.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: example-grpc-client
      namespace: default
      annotations:
        iam.gke.io/gcp-service-account: ${PROJNUM}-compute@developer.gserviceaccount.com
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-grpc-client
      namespace: default
      labels:
        k8s-app: example-grpc-client
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s-app: example-grpc-client
      strategy: {}
      template:
        metadata:
          annotations:
            security.cloud.google.com/use-workload-certificates: ""
          labels:
            k8s-app: example-grpc-client
        spec:
          containers:
          - image: phusion/baseimage:18.04-1.0.0
            imagePullPolicy: IfNotPresent
            name: example-grpc-client
            command:
            - /bin/sleep
            - inf
            env:
            - name: GRPC_XDS_BOOTSTRAP
              value: "/tmp/grpc-xds/td-grpc-bootstrap.json"
            resources:
              limits:
                cpu: 8
                memory: 8Gi
              requests:
                cpu: 300m
                memory: 512Mi
            volumeMounts:
            - name: grpc-td-conf
              mountPath: /tmp/grpc-xds/
          initContainers:
          - name: grpc-td-init
            image: gcr.io/trafficdirector-prod/td-grpc-bootstrap:0.16.0
            imagePullPolicy: Always
            args:
            - --output
            - "/tmp/bootstrap/td-grpc-bootstrap.json"
            resources:
              limits:
                cpu: 100m
                memory: 100Mi
              requests:
                cpu: 10m
                memory: 100Mi
            volumeMounts:
            - name: grpc-td-conf
              mountPath: /tmp/bootstrap/
          serviceAccountName: example-grpc-client
          volumes:
          - name: grpc-td-conf
            emptyDir:
              medium: Memory
    EOF

Go

  1. Run the following command to ensure that the project number is correctly set:

    if [ -z "$PROJNUM" ] ; then export PROJNUM=$(gcloud projects describe $(gcloud info --format='value(config.project)') --format="value(projectNumber)") ; fi ; echo $PROJNUM
  2. Create the following spec:

    cat << EOF > example-grpc-client.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: example-grpc-client
      namespace: default
      annotations:
        iam.gke.io/gcp-service-account: ${PROJNUM}-compute@developer.gserviceaccount.com
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-grpc-client
      namespace: default
      labels:
        k8s-app: example-grpc-client
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s-app: example-grpc-client
      strategy: {}
      template:
        metadata:
          annotations:
            security.cloud.google.com/use-workload-certificates: ""
          labels:
            k8s-app: example-grpc-client
        spec:
          containers:
          - image: golang:1.16-alpine
            imagePullPolicy: IfNotPresent
            name: example-grpc-client
            command:
            - /bin/sleep
            - inf
            env:
            - name: GRPC_XDS_BOOTSTRAP
              value: "/tmp/grpc-xds/td-grpc-bootstrap.json"
            resources:
              limits:
                cpu: 8
                memory: 8Gi
              requests:
                cpu: 300m
                memory: 512Mi
            volumeMounts:
            - name: grpc-td-conf
              mountPath: /tmp/grpc-xds/
          initContainers:
          - name: grpc-td-init
            image: gcr.io/trafficdirector-prod/td-grpc-bootstrap:0.16.0
            imagePullPolicy: Always
            args:
            - --output
            - "/tmp/bootstrap/td-grpc-bootstrap.json"
            resources:
              limits:
                cpu: 100m
                memory: 100Mi
              requests:
                cpu: 10m
                memory: 100Mi
            volumeMounts:
            - name: grpc-td-conf
              mountPath: /tmp/bootstrap/
          serviceAccountName: example-grpc-client
          volumes:
          - name: grpc-td-conf
            emptyDir:
              medium: Memory
    EOF

Complete the process as follows.

  1. Apply the spec:

    kubectl apply -f example-grpc-client.yaml
  2. Grant the required roles to the service account:

    gcloud iam service-accounts add-iam-policy-binding \
      --role roles/iam.workloadIdentityUser \
      --member "serviceAccount:${PROJECT_ID}.svc.id.goog[default/example-grpc-client]" \
      ${PROJNUM}-compute@developer.gserviceaccount.com

    gcloud projects add-iam-policy-binding ${PROJECT_ID} \
      --member "serviceAccount:${PROJECT_ID}.svc.id.goog[default/example-grpc-client]" \
      --role roles/trafficdirector.client
  3. Verify that the client Pod is running:

    kubectl get pods

    The command returns text similar to the following:

    NAMESPACE   NAME                                    READY   STATUS    RESTARTS   AGE
    default     example-grpc-client-7c969bb997-9fzjv    1/1     Running   0          104s
    [..skip..]

Run the server

Build and run the xDS-enabled helloworld server in the server Pod you created earlier.

Java

  1. Get the name of the Pod created for the example-grpc-server service:

    kubectl get pods | grep example-grpc-server

    You see feedback such as the following:

    default    example-grpc-server-77548868d-l9hmf     1/1    Running   0     105s
  2. Open a shell to the server Pod:

    kubectl exec -it example-grpc-server-77548868d-l9hmf -- /bin/bash
  3. In the shell, verify that the bootstrap file at /tmp/grpc-xds/td-grpc-bootstrap.json matches the schema described in the section Bootstrap file.

  4. Download gRPC Java version 1.42.1 and build the xds-hello-world server application.

    curl -L https://github.com/grpc/grpc-java/archive/v1.42.1.tar.gz | tar -xz
    cd grpc-java-1.42.1/examples/example-xds
    ../gradlew --no-daemon installDist
  5. Run the server with the --xds-creds flag to indicate xDS-enabled security, using 50051 as the listening port, and xds-server as the server identification name:

    ./build/install/example-xds/bin/xds-hello-world-server --xds-creds 50051 xds-server
  6. After the server obtains the necessary configuration from Cloud Service Mesh, you see the following output:

    Listening on port 50051
    plain text health service listening on port 50052

C++

  1. Get the name of the Pod created for the example-grpc-server service:

    kubectl get pods | grep example-grpc-server

    You see feedback such as the following:

    default    example-grpc-server-77548868d-l9hmf     1/1    Running   0     105s
  2. Open a shell to the server Pod:

    kubectl exec -it example-grpc-server-77548868d-l9hmf -- /bin/bash
  3. In the shell, verify that the bootstrap file at /tmp/grpc-xds/td-grpc-bootstrap.json matches the schema described in the section Bootstrap file.

  4. Download gRPC C++ and build the xds_greeter_server application.

    apt-get update -y && \
        apt-get install -y \
            build-essential \
            clang \
            python3 \
            python3-dev

    curl -L https://github.com/grpc/grpc/archive/master.tar.gz | tar -xz
    cd grpc-master
    tools/bazel build examples/cpp/helloworld:xds_greeter_server
  5. Run the server using 50051 as the listening port, and xds_greeter_server as the server identification name:

    bazel-bin/examples/cpp/helloworld/xds_greeter_server --port=50051 --maintenance_port=50052 --secure

    To run the server without credentials, you can specify the following:

    bazel-bin/examples/cpp/helloworld/xds_greeter_server --nosecure
  6. After the server obtains the necessary configuration from Cloud Service Mesh, you see the following output:

    Listening on port 50051
    plain text health service listening on port 50052

Python

  1. Get the name of the Pod created for the example-grpc-server service:

    kubectl get pods | grep example-grpc-server

    You see feedback such as the following:

    default    example-grpc-server-77548868d-l9hmf     1/1    Running   0     105s
  2. Open a shell to the server Pod:

    kubectl exec -it example-grpc-server-77548868d-l9hmf -- /bin/bash
  3. In the shell, verify that the bootstrap file at /tmp/grpc-xds/td-grpc-bootstrap.json matches the schema described in the section Bootstrap file.

  4. Download gRPC Python version 1.41.0 and build the example application.

    apt-get update -y
    apt-get install -y python3 python3-pip
    curl -L https://github.com/grpc/grpc/archive/v1.41.x.tar.gz | tar -xz
    cd grpc-1.41.x/examples/python/xds/
    python3 -m virtualenv venv
    source venv/bin/activate
    python3 -m pip install -r requirements.txt

  5. Run the server with the --xds-creds flag to indicate xDS-enabled security, using 50051 as the listening port.

    python3 server.py 50051 --xds-creds
  6. After the server obtains the necessary configuration from Cloud Service Mesh, you see the following output:

    2021-05-06 16:10:34,042: INFO     Running with xDS Server credentials
    2021-05-06 16:10:34,043: INFO     Greeter server listening on port 50051
    2021-05-06 16:10:34,046: INFO     Maintenance server listening on port 50052

Go

  1. Get the name of the Pod created for the example-grpc-server service:

    kubectl get pods | grep example-grpc-server

    You see feedback such as the following:

    default    example-grpc-server-77548868d-l9hmf     1/1    Running   0     105s
  2. Open a shell to the server Pod:

    kubectl exec -it example-grpc-server-77548868d-l9hmf -- /bin/sh
  3. In the shell, verify that the bootstrap file at /tmp/grpc-xds/td-grpc-bootstrap.json matches the schema described in the section Bootstrap file.

  4. Download gRPC Go version 1.42.0 and navigate to the directory containing the xds-hello-world server application.

    apk add curl
    curl -L https://github.com/grpc/grpc-go/archive/v1.42.0.tar.gz | tar -xz
    cd grpc-go-1.42.0/examples/features/xds/server
  5. Build and run the server with the --xds_creds flag to indicate xDS-enabled security, using 50051 as the listening port:

    GRPC_GO_LOG_VERBOSITY_LEVEL=2 GRPC_GO_LOG_SEVERITY="info" \
      go run main.go \
      -xds_creds \
      -port 50051
  6. After the server obtains the necessary configuration from Cloud Service Mesh, you see the following output:

    Using xDS credentials...
    Serving GreeterService on 0.0.0.0:50051 and HealthService on 0.0.0.0:50052

The health checking process takes 3 to 5 minutes after the server starts to report that your service is healthy.
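While you wait, a small retry wrapper can poll for health instead of checking by hand. The helper below is our own sketch (the function name and stub invocation are illustrative); in practice you would pass it the real command, such as `gcloud compute backend-services get-health grpc-gke-helloworld-service --global`:

```shell
# Poll a command until its output contains HEALTHY, up to N attempts.
wait_for_healthy() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@" | grep -q HEALTHY; then
      echo "healthy after $i check(s)"
      return 0
    fi
    sleep "${POLL_SECS:-30}"
    i=$((i + 1))
  done
  echo "still unhealthy after $attempts checks" >&2
  return 1
}

# Stubbed usage so the sketch is self-contained; replace the echo with the
# real get-health command in your environment.
POLL_SECS=0 wait_for_healthy 3 echo "healthStatus: HEALTHY"
```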

Run the client and verify the configuration

Build and run the xDS-enabled helloworld client in the client Pod you created earlier.

Java

  1. Get the name of the client Pod:

    kubectl get pods | grep example-grpc-client

    You see feedback such as this:

    default    example-grpc-client-7c969bb997-9fzjv     1/1    Running   0     105s
  2. Open a shell to the client Pod:

    kubectl exec -it example-grpc-client-7c969bb997-9fzjv -- /bin/bash
  3. In the command shell, download gRPC Java version 1.42.1 and build the xds-hello-world client application.

    curl -L https://github.com/grpc/grpc-java/archive/v1.42.1.tar.gz | tar -xz
    cd grpc-java-1.42.1/examples/example-xds
    ../gradlew --no-daemon installDist
  4. Run the client with the --xds-creds flag to indicate xDS-enabled security, the client name, and the target connection string:

    ./build/install/example-xds/bin/xds-hello-world-client --xds-creds xds-client \
        xds:///helloworld-gke:8000

    You should see output similar to this:

    Greeting: Hello xds-client, from xds-server

C++

  1. Get the name of the client Pod:

    kubectl get pods | grep example-grpc-client

    You see feedback such as this:

    default    example-grpc-client-7c969bb997-9fzjv     1/1    Running   0     105s
  2. Open a shell to the client Pod:

    kubectl exec -it example-grpc-client-7c969bb997-9fzjv -- /bin/bash
  3. After you are inside the shell, download gRPC C++ and build the xds_greeter_client application.

    apt-get update -y && \
        apt-get install -y \
            build-essential \
            clang \
            python3 \
            python3-dev
    curl -L https://github.com/grpc/grpc/archive/master.tar.gz | tar -xz
    cd grpc-master
    tools/bazel build examples/cpp/helloworld:xds_greeter_client
  4. Run the client with the target connection string:

    bazel-bin/examples/cpp/helloworld/xds_greeter_client --target=xds:///helloworld-gke:8000

    To run the client without credentials, use the following:

    bazel-bin/examples/cpp/helloworld/xds_greeter_client --target=xds:///helloworld-gke:8000 --nosecure

    You should see output similar to this:

    Greeter received: Hello world

Python

  1. Get the name of the client Pod:

    kubectl get pods | grep example-grpc-client

    You see feedback such as this:

    default    example-grpc-client-7c969bb997-9fzjv     1/1    Running   0     105s
  2. Open a shell to the client Pod:

    kubectl exec -it example-grpc-client-7c969bb997-9fzjv -- /bin/bash
  3. After you are inside the shell, download gRPC Python version 1.41.0 and build the example client application.

    apt-get update -y
    apt-get install -y python3 python3-pip
    python3 -m pip install virtualenv
    curl -L https://github.com/grpc/grpc/archive/v1.41.x.tar.gz | tar -xz
    cd grpc-1.41.x/examples/python/xds/
    python3 -m virtualenv venv
    source venv/bin/activate
    python3 -m pip install -r requirements.txt
  4. Run the client with the --xds-creds flag to indicate xDS-enabled security, and the target connection string:

    python3 client.py xds:///helloworld-gke:8000 --xds-creds

    You should see output similar to this:

    Greeter client received: Hello you from example-host!

Go

  1. Get the name of the client Pod:

    kubectl get pods | grep example-grpc-client

    You see feedback such as this:

    default    example-grpc-client-7c969bb997-9fzjv     1/1    Running   0     105s
  2. Open a shell to the client Pod:

    kubectl exec -it example-grpc-client-7c969bb997-9fzjv -- /bin/sh
  3. After you are inside the shell, download gRPC Go version 1.42.0 and navigate to the directory containing the xds-hello-world client application.

    apk add curl
    curl -L https://github.com/grpc/grpc-go/archive/v1.42.0.tar.gz | tar -xz
    cd grpc-go-1.42.0/examples/features/xds/client
  4. Build and run the client with the --xds_creds flag to indicate xDS-enabled security, the client name, and the target connection string:

    GRPC_GO_LOG_VERBOSITY_LEVEL=2 GRPC_GO_LOG_SEVERITY="info" \
      go run main.go \
      -xds_creds \
      -name xds-client \
      -target xds:///helloworld-gke:8000

    You should see output similar to this:

    Greeting: Hello xds-client, from example-grpc-server-77548868d-l9hmf

Configure service-level access with an authorization policy

gRFC A41 support is required for authorization policy support. You can find the required language versions on GitHub.

Use these instructions to configure service-level access with authorization policies. Before you create authorization policies, read the caution in Restrict access using authorization.

To make it easier to verify the configuration, create an additional hostname that the client can use to refer to the helloworld-gke service.

gcloud compute url-maps add-host-rule grpc-gke-url-map \
    --path-matcher-name grpc-gke-path-matcher \
    --hosts helloworld-gke-noaccess:8000

The following instructions create an authorization policy that allows requests sent by the example-grpc-client account when the hostname is helloworld-gke:8000 and the port is 50051.

gcloud

  1. Create an authorization policy by creating a file called helloworld-gke-authz-policy.yaml.

    action: ALLOW
    name: helloworld-gke-authz-policy
    rules:
    - sources:
      - principals:
        - spiffe://PROJECT_ID.svc.id.goog/ns/default/sa/example-grpc-client
      destinations:
      - hosts:
        - helloworld-gke:8000
        ports:
        - 50051
  2. Import the policy.

    gcloud network-security authorization-policies import \
      helloworld-gke-authz-policy \
      --source=helloworld-gke-authz-policy.yaml \
      --location=global
  3. Update the endpoint policy to reference the new authorization policy by appending the following to the file ep-mtls-psms.yaml.

    authorizationPolicy: projects/${PROJECT_ID}/locations/global/authorizationPolicies/helloworld-gke-authz-policy

    The endpoint policy now specifies that both mTLS and the authorization policy must be enforced on inbound requests to Pods whose gRPC bootstrap files contain the label app:helloworld.

  4. Import the policy:

    gcloud network-services endpoint-policies import ep-mtls-psms \
      --source=ep-mtls-psms.yaml --location=global
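After the append in step 3, the complete ep-mtls-psms.yaml should look like the following (assuming the file from the earlier mTLS setup was otherwise unchanged):

```yaml
name: "ep-mtls-psms"
type: "GRPC_SERVER"
serverTlsPolicy: "projects/${PROJECT_ID}/locations/global/serverTlsPolicies/server-mtls-policy"
trafficPortSelector:
  ports:
  - "50051"
endpointMatcher:
  metadataLabelMatcher:
    metadataLabelMatchCriteria: "MATCH_ALL"
    metadataLabels:
    - labelName: app
      labelValue: helloworld
authorizationPolicy: projects/${PROJECT_ID}/locations/global/authorizationPolicies/helloworld-gke-authz-policy
```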

Validate the authorization policy

Use these instructions to confirm that the authorization policy is working correctly.

Java

  1. Open a shell to the client pod you used previously.

    kubectl exec -it example-grpc-client-7c969bb997-9fzjv -- /bin/bash
  2. In the command shell, run the following commands to validate the setup.

    cd grpc-java-1.42.1/examples/example-xds
    ./build/install/example-xds/bin/xds-hello-world-client --xds-creds xds-client \
        xds:///helloworld-gke:8000

    You should see output similar to this:

    Greeting: Hello xds-client, from xds-server
  3. Run the client again with the alternative server name. Note that this is a failure case. The request is invalid because the authorization policy only allows access to the helloworld-gke:8000 hostname.

    ./build/install/example-xds/bin/xds-hello-world-client --xds-creds xds-client \
        xds:///helloworld-gke-noaccess:8000

    You should see output similar to this:

    WARNING: RPC failed: Status{code=PERMISSION_DENIED}

    If you don't see this output, the authorization policy might not be in use yet. Wait a few minutes and try the entire verification process again.

Go

  1. Open a shell to the client pod you used previously.

    kubectl exec -it example-grpc-client-7c969bb997-9fzjv -- /bin/sh
  2. In the command shell, run the following commands to validate the setup.

    cd grpc-go-1.42.0/examples/features/xds/client
    GRPC_GO_LOG_VERBOSITY_LEVEL=2 GRPC_GO_LOG_SEVERITY="info" \
      go run main.go \
      -xds_creds \
      -name xds-client \
      -target xds:///helloworld-gke:8000

    You should see output similar to this:

    Greeting: Hello xds-client, from example-grpc-server-77548868d-l9hmf
  3. Run the client again with the alternative server name. Note that this is a failure case. The request is invalid because the authorization policy only allows access to the helloworld-gke:8000 hostname.

    GRPC_GO_LOG_VERBOSITY_LEVEL=2 GRPC_GO_LOG_SEVERITY="info" \
      go run main.go \
      -xds_creds \
      -name xds-client \
      -target xds:///helloworld-gke-noaccess:8000

    You should see output similar to this:

    could not greet: rpc error: code = PermissionDenied desc = Incoming RPC is not allowed: rpc error: code = PermissionDenied desc = incoming RPC did not match an allow policy
    exit status 1

    If you don't see this output, the authorization policy might not be in use yet. Wait a few minutes and try the entire verification process again.

Use TLS instead of mTLS

Using TLS in this example requires only a small change.

  1. In the ServerTlsPolicy, drop the mtlsPolicy:

    cat << EOF > server-tls-policy.yaml
    name: "server-tls-policy"
    serverCertificate:
      certificateProviderInstance:
        pluginInstance: google_cloud_private_spiffe
    EOF
  2. Use this policy in the EndpointPolicy instead:

    cat << EOF > ep-tls-psms.yaml
    name: "ep-mtls-psms"
    type: "GRPC_SERVER"
    serverTlsPolicy: "projects/${PROJECT_ID}/locations/global/serverTlsPolicies/server-tls-policy"
    trafficPortSelector:
      ports:
      - "50051"
    endpointMatcher:
      metadataLabelMatcher:
        metadataLabelMatchCriteria: "MATCH_ALL"
        metadataLabels: []
    EOF
  3. The ClientTlsPolicy for mTLS also works in the TLS case, but the clientCertificate section of the policy can be dropped because it is not required for TLS:

    cat << EOF > client-tls-policy.yaml
    name: "client-tls-policy"
    serverValidationCa:
    - certificateProviderInstance:
        pluginInstance: google_cloud_private_spiffe
    EOF
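Before importing the TLS policies, you can sanity-check the generated files. The following is a minimal sketch (the temporary directory and grep checks are illustrative assumptions, not part of the official setup) that regenerates the two files and confirms that the mTLS-only fields are absent:

```shell
# Regenerate the TLS policy files from the steps above in a temp directory,
# then confirm the mTLS-only sections are gone before importing them.
workdir=$(mktemp -d)

cat << EOF > "$workdir/server-tls-policy.yaml"
name: "server-tls-policy"
serverCertificate:
  certificateProviderInstance:
    pluginInstance: google_cloud_private_spiffe
EOF

cat << EOF > "$workdir/client-tls-policy.yaml"
name: "client-tls-policy"
serverValidationCa:
- certificateProviderInstance:
    pluginInstance: google_cloud_private_spiffe
EOF

# TLS-only policies must not carry the mTLS-specific fields.
! grep -q "mtlsPolicy" "$workdir/server-tls-policy.yaml" && echo "server policy: no mtlsPolicy"
! grep -q "clientCertificate" "$workdir/client-tls-policy.yaml" && echo "client policy: no clientCertificate"
```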

Use service security with the Wallet example

This section provides a high-level overview of how to enable the Wallet example with service security, for Java, C++, and Go.

Java

You can find the example source code for Java on GitHub. The code already uses XdsChannel and XdsServer credentials when you configure proxyless security.

These instructions describe configuring the Wallet example with Go. The process is similar for Java. The instructions use a pre-existing Docker image that you obtain from the Google Cloud container repository.

To create the example, follow these instructions:

  1. Clone the repository and get the files in the directory gRPC examples.
  2. Edit the file 00-common-env.sh. Comment out the existing line that sets the value of WALLET_DOCKER_IMAGE to the Go Docker image and uncomment the line that sets the value of WALLET_DOCKER_IMAGE to the Java Docker image.
  3. Create and configure Cloud Router instances, using the instructions in Create and configure Cloud Router instances or using the function create_cloud_router_instances in the script 10.apis.sh.
  4. Create a cluster using the instructions for the hello world example, or the function create_cluster in the script 20-cluster.sh.
  5. Create private certificate authorities using the instructions for CA Service or using the script 30-private-ca-setup.sh.
  6. Create Kubernetes resources, including service accounts, namespaces, Kubernetes services, NEGs, and the server-side deployment for all the services: account, stats, stats_premium, wallet_v1, wallet_v2, using the script 40-k8s-resources.sh.
  7. For each of the services you created, create a health check and backend service using create_health_check and create_backend_service in the script 50-td-components.sh.
  8. Create the Cloud Service Mesh routing components using create_routing_components in the script 60-routing-components.sh.
  9. Create the Cloud Service Mesh security components for each backend service using create_security_components in the script 70-security-components.sh.
  10. Create the Wallet client deployment using create_client_deployment in the script 75-client-deployment.sh.
  11. Verify the configuration by launching your client as described in Verify with grpc-wallet clients.
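The script sequence above can be captured in a small driver. This is a hypothetical sketch, not part of the repository: the script names come from the steps, but the driver loop is an assumption and only echoes each script as a dry run.

```shell
# Dry-run driver for the numbered Wallet setup scripts.
# Script names are taken from the steps above; to actually run the setup,
# replace echo with: bash "$script"
scripts="10.apis.sh 20-cluster.sh 30-private-ca-setup.sh 40-k8s-resources.sh \
50-td-components.sh 60-routing-components.sh 70-security-components.sh \
75-client-deployment.sh"
for script in $scripts; do
  echo "would run: $script"
done
```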

C++

You can find the example source code for C++ on GitHub. The code already uses XdsChannel and XdsServer credentials when you configure proxyless security.

These instructions describe configuring the Wallet example with Go. The process is similar for C++. The instructions use a pre-existing Docker image that you obtain from the Google Cloud container repository.

To create the example, follow these instructions:

  1. Clone the repository and get the files in the directory gRPC examples.
  2. Edit the file 00-common-env.sh. Comment out the existing line that sets the value of WALLET_DOCKER_IMAGE to the Go Docker image and uncomment the line that sets the value of WALLET_DOCKER_IMAGE to the C++ Docker image.
  3. Create and configure Cloud Router instances, using the instructions in Create and configure Cloud Router instances or using the function create_cloud_router_instances in the script 10.apis.sh.
  4. Create a cluster using the instructions for the hello world example, or the function create_cluster in the script 20-cluster.sh.
  5. Create private certificate authorities using the instructions for CA Service or using the script 30-private-ca-setup.sh.
  6. Create Kubernetes resources, including service accounts, namespaces, Kubernetes services, NEGs, and the server-side deployment for all the services: account, stats, stats_premium, wallet_v1, wallet_v2, using the script 40-k8s-resources.sh.
  7. For each of the services you created, create a health check and backend service using create_health_check and create_backend_service in the script 50-td-components.sh.
  8. Create the Cloud Service Mesh routing components using create_routing_components in the script 60-routing-components.sh.
  9. Create the Cloud Service Mesh security components for each backend service using create_security_components in the script 70-security-components.sh.
  10. Create the Wallet client deployment using create_client_deployment in the script 75-client-deployment.sh.
  11. Verify the configuration by launching your client as described in Verify with grpc-wallet clients.

Go

You can find example source code for Go on GitHub. The code already uses XdsChannel and XdsServer credentials when you configure proxyless security.

The instructions use a pre-existing Docker image that you obtain from the Google Cloud container repository.

To create the example, follow these instructions:

  1. Clone the repository and get the files in the directory gRPC examples.
  2. Edit the file 00-common-env.sh to set the correct values for the environment variables.
  3. Create and configure Cloud Router instances, using the instructions in Create and configure Cloud Router instances or using the function create_cloud_router_instances in the script 10.apis.sh.
  4. Create a cluster using the instructions for the hello world example, or the function create_cluster in the script 20-cluster.sh.
  5. Create private certificate authorities using the instructions for CA Service or using the script 30-private-ca-setup.sh.
  6. Create Kubernetes resources, including service accounts, namespaces, Kubernetes services, NEGs, and the server-side deployment for all the services: account, stats, stats_premium, wallet_v1, wallet_v2, using the script 40-k8s-resources.sh.
  7. For each of the services you created, create a health check and backend service using create_health_check and create_backend_service in the script 50-td-components.sh.
  8. Create the Cloud Service Mesh routing components using create_routing_components in the script 60-routing-components.sh.
  9. Create the Cloud Service Mesh security components for each backend service using create_security_components in the script 70-security-components.sh.
  10. Create the Wallet client deployment using create_client_deployment in the script 75-client-deployment.sh.
  11. Verify the configuration by launching your client as described in Verify with grpc-wallet clients.

Bootstrap file

The setup process in this guide uses a bootstrap generator to create the required bootstrap file. This section provides reference information about the bootstrap file itself.

The bootstrap file contains configuration information required by proxyless gRPC code, including connection information for the xDS server. The bootstrap file contains security configuration that is required by the proxyless gRPC security feature. The gRPC server requires one additional field, as described in the following sections. A sample bootstrap file looks like this:

{  "xds_servers": [    {      "server_uri": "trafficdirector.googleapis.com:443",      "channel_creds": [        {          "type": "google_default"        }      ],      "server_features": [        "xds_v3"      ]    }  ],  "node": {    "cluster": "cluster",    "id": "projects/9876012345/networks/default/nodes/client1",    "metadata": {      "TRAFFICDIRECTOR_GCP_PROJECT_NUMBER": "9876012345",      "TRAFFICDIRECTOR_NETWORK_NAME": "default",      "INSTANCE_IP": "10.0.0.3"    },    "locality": {      "zone": "us-central1-a"    }  },  "server_listener_resource_name_template": "grpc/server?xds.resource.listening_address=%s",  "certificate_providers": {    "google_cloud_private_spiffe": {      "plugin_name": "file_watcher",      "config": {        "certificate_file": "/var/run/secrets/workload-spiffe-credentials/certificates.pem",        "private_key_file": "/var/run/secrets/workload-spiffe-credentials/private_key.pem",        "ca_certificate_file": "/var/run/secrets/workload-spiffe-credentials/ca_certificates.pem",        "refresh_interval": "600s"      }    }  }}

Updates to the bootstrap file for security service

The following fields reflect modifications related to security and xDS v3 usage:

The id field inside node provides a unique identity for the gRPC client to Cloud Service Mesh. You must provide the Google Cloud project number and network name using the node ID in this format:

projects/{project number}/networks/{network name}/nodes/[UNIQUE_ID]

An example for project number 1234 and the default network is:

projects/1234/networks/default/nodes/client1
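The node ID can be composed from variables in the shell; a minimal sketch, where the values below are placeholders for illustration:

```shell
# Compose the node ID in the format shown above.
# PROJECT_NUMBER, NETWORK_NAME, and UNIQUE_ID are placeholder values.
PROJECT_NUMBER="1234"
NETWORK_NAME="default"
UNIQUE_ID="client1"
NODE_ID="projects/${PROJECT_NUMBER}/networks/${NETWORK_NAME}/nodes/${UNIQUE_ID}"
echo "$NODE_ID"
```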

The INSTANCE_IP field is the IP address of the Pod, or 0.0.0.0 to indicate INADDR_ANY. This field is used by the gRPC server for fetching the Listener resource from Cloud Service Mesh for server-side security.

Security config fields in the bootstrap file

JSON key: server_listener_resource_name_template
  Type: String
  Value: grpc/server?xds.resource.listening_address=%s
  Notes: Required for gRPC servers. gRPC uses this value to compose the resource name for fetching the Listener resource from Cloud Service Mesh for server-side security and other configuration.

JSON key: certificate_providers
  Type: JSON struct
  Value: google_cloud_private_spiffe
  Notes: Required. The value is a JSON struct representing a map of names to certificate provider instances. A certificate provider instance is used for fetching identity and root certificates. The example bootstrap file contains one name, google_cloud_private_spiffe, with the certificate provider instance JSON struct as the value. Each certificate provider instance JSON struct has two fields:
  • plugin_name. Required value that identifies the certificate provider plugin to be used, as required by gRPC's plugin architecture for certificate providers. gRPC has built-in support for the file-watcher plugin that is used in this setup. The plugin_name is file_watcher.
  • config. Required value that identifies the JSON configuration blob for the file_watcher plugin. The schema and content depend on the plugin.

The contents of the config JSON structure for the file_watcher plugin are:

  • certificate_file: Required string. This value is the location of the identity certificate.
  • private_key_file: Required string. The value is the location of the private key file, which should match the identity certificate.
  • ca_certificate_file: Required string. The value is the location of the root certificate, which is also known as the trust bundle.
  • refresh_interval: Optional string. The value indicates the refresh interval, specified using the string representation of a Duration's JSON mapping. The default value is "600s", a duration of 10 minutes.
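If you hand-edit this block, you can check that it still parses as JSON before placing it in the bootstrap file. A minimal sketch, assuming python3 is available; the temp-file workflow is illustrative only:

```shell
# Write a standalone file_watcher config using the fields described above,
# then check that it parses as valid JSON.
cfg=$(mktemp)
cat << EOF > "$cfg"
{
  "certificate_file": "/var/run/secrets/workload-spiffe-credentials/certificates.pem",
  "private_key_file": "/var/run/secrets/workload-spiffe-credentials/private_key.pem",
  "ca_certificate_file": "/var/run/secrets/workload-spiffe-credentials/ca_certificates.pem",
  "refresh_interval": "600s"
}
EOF
python3 -m json.tool "$cfg" > /dev/null && echo "config is valid JSON"
```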

Bootstrap generator

The bootstrap generator container image is available at gcr.io/trafficdirector-prod/td-grpc-bootstrap:0.16.0. Its source code is available at https://github.com/GoogleCloudPlatform/traffic-director-grpc-bootstrap. The most commonly used command-line options are these:

  • --output: Use this option to specify where the output bootstrap file is written. For example, the command --output /tmp/bootstrap/td-grpc-bootstrap.json generates the bootstrap file at /tmp/bootstrap/td-grpc-bootstrap.json in the Pod's file system.
  • --node-metadata: Use this flag to populate the node metadata in the bootstrap file. This is required when you use metadata label matchers in the EndpointPolicy, where Cloud Service Mesh uses the label data provided in the node metadata section of the bootstrap file. The argument is supplied in the form key=value, for example: --node-metadata version=prod --node-metadata type=grpc

These options add the following in the node metadata section of the bootstrap file:

{  "node": {...    "metadata": {      "version": "prod",      "type": "grpc",...    },...  },...}
Note: Per policy, we encourage users to check for updates of the TD gRPC bootstrap generator yearly.

Delete the deployment

You can optionally run these commands to delete the deployment you created using this guide.

To delete the cluster, run this command:

gcloud container clusters delete CLUSTER_NAME --zone ZONE --quiet

To delete the resources you created, run these commands:

gcloud compute backend-services delete grpc-gke-helloworld-service --global --quiet
gcloud compute network-endpoint-groups delete example-grpc-server --zone ZONE --quiet
gcloud compute firewall-rules delete grpc-gke-allow-health-checks --quiet
gcloud compute health-checks delete grpc-gke-helloworld-hc --quiet
gcloud network-services endpoint-policies delete ep-mtls-psms \
    --location=global --quiet
gcloud network-security authorization-policies delete helloworld-gke-authz-policy \
    --location=global --quiet
gcloud network-security client-tls-policies delete client-mtls-policy \
    --location=global --quiet
gcloud network-security server-tls-policies delete server-tls-policy \
    --location=global --quiet
gcloud network-security server-tls-policies delete server-mtls-policy \
    --location=global --quiet

Troubleshooting

Use these instructions to help you resolve problems with your security deployment.

Workloads are unable to get config from Cloud Service Mesh

If you see an error similar to this:

PERMISSION_DENIED: Request had insufficient authentication scopes.

Make sure of the following:

  • You created your GKE cluster with the --scopes=cloud-platform argument.
  • You assigned the roles/trafficdirector.client role to your Kubernetes service accounts.
  • You assigned the roles/trafficdirector.client role to your default Google Cloud service account (${GSA_EMAIL} above).
  • You enabled the trafficdirector.googleapis.com service (API).

Your gRPC server does not use TLS or mTLS even with correct Cloud Service Mesh configuration

Make sure you specify GRPC_SERVER in your endpoint policies configuration. If you specified SIDECAR_PROXY, gRPC ignores the configuration.

You are unable to create the GKE cluster with the requested cluster version

The GKE cluster creation command might fail with an error similar to this:

Node version "1.20.5-gke.2000" is unsupported.

Make sure that you are using the argument --release-channel rapid in your cluster creation command. You need to use the rapid release channel to get the correct version for this release.

You see a No usable endpoint error

If a client can't communicate with the server because of a No usable endpoint error, the health checker might have marked the server backends as unhealthy. To check the health of the backends, run this gcloud command:

gcloud compute backend-services get-health grpc-gke-helloworld-service --global

If the command returns a backend status of unhealthy, it might be for one of these reasons:

  • The firewall was not created or does not contain the correct source IP range.
  • The target tags on your firewall don't match the tags on the cluster you created.

Workloads are unable to communicate in the security setup

If your workloads are not able to communicate after you set up security for your proxyless service mesh, follow these instructions to determine the cause.

  1. Disable proxyless security and eliminate issues in the proxyless service mesh load balancing use cases. To disable security in the mesh, do one of the following:
    1. Use plaintext credentials on the client and server side, or
    2. Don't configure security for the backend service and endpoint policy in the Cloud Service Mesh configuration.

Follow the steps in Troubleshooting proxyless Cloud Service Mesh deployments, because there is no security setup in your deployment.

  1. Modify your workloads to use xDS credentials with plaintext or insecure credentials as the fallback credentials. Keep the Cloud Service Mesh configuration with security disabled as discussed previously. In this case, although gRPC allows Cloud Service Mesh to configure security, Cloud Service Mesh does not send security information, so gRPC should fall back to plaintext (or insecure) credentials, which should work similarly to the first case described previously. If this case does not work, do the following:

    1. Increase the logging level on both the client and server side so that you can see the xDS messages exchanged between gRPC and Cloud Service Mesh.
    2. Ensure that Cloud Service Mesh does not have security enabled in the CDS and LDS responses that are sent to the workloads.
    3. Ensure that the workloads are not using TLS or mTLS modes in their channels. If you see any log messages related to TLS handshakes, check your application source code and make sure that you are using insecure or plaintext as your fallback credentials. If the application source code is correct, this might be a bug in the gRPC library.
  2. Verify that the CA Service integration with GKE is working correctly for your GKE cluster by following the troubleshooting steps in that user guide. Make sure that the certificates and keys provided by that feature are made available in the specified directory, /var/run/secrets/workload-spiffe-credentials/.

  3. Enable TLS (instead of mTLS) in your mesh, as described previously, and restart your client and server workloads.

    1. Increase the logging level on both the client and server side to be able to see the xDS messages exchanged between gRPC and Cloud Service Mesh.
    2. Ensure that Cloud Service Mesh has enabled security in the CDS and LDS responses that are sent to the workloads.

Client fails with a CertificateException and a message Peer certificate SAN check failed

This indicates a problem with the subjectAltNames values in the SecuritySettings message. Note that these values are based on the Kubernetes services you created for your backend service. For every such Kubernetes service you created, there is an associated SPIFFE ID, in this format:

spiffe://${WORKLOAD_POOL}/ns/${K8S_NAMESPACE}/sa/${SERVICE_ACCOUNT}

These values are:

  • WORKLOAD_POOL: The workload pool for the cluster, which is ${PROJECT_ID}.svc.id.goog
  • K8S_NAMESPACE: The Kubernetes namespace you used in the deployment of the service
  • SERVICE_ACCOUNT: The Kubernetes service account you used in the deployment of the service

For every Kubernetes service you attached to your backend service as a network endpoint group, make sure that you correctly computed the SPIFFE ID and added that SPIFFE ID to the subjectAltNames field in the SecuritySettings message.
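As an illustration, the expected SPIFFE ID can be assembled in the shell from the three values above; the example values here are assumptions:

```shell
# Compute the expected SPIFFE ID for one Kubernetes service.
# PROJECT_ID, K8S_NAMESPACE, and SERVICE_ACCOUNT are placeholder values.
PROJECT_ID="my-project"
K8S_NAMESPACE="default"
SERVICE_ACCOUNT="example-grpc-server"
WORKLOAD_POOL="${PROJECT_ID}.svc.id.goog"
SPIFFE_ID="spiffe://${WORKLOAD_POOL}/ns/${K8S_NAMESPACE}/sa/${SERVICE_ACCOUNT}"
echo "$SPIFFE_ID"
```

Compare the printed value against the subjectAltNames entries in your SecuritySettings message.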

Applications cannot use the mTLS certificates with your gRPC library

If your applications are unable to use the mTLS certificates with your gRPC library, do the following:

  1. Verify that the Pod spec contains the security.cloud.google.com/use-workload-certificates annotation that is described in Creating a proxyless gRPC service with NEGs.

  2. Verify that the files containing the certificate chain along with the leaf certificate, private key, and the trusted CA certificates are accessible at the following paths from within the Pod:

    1. Certificate chain along with leaf cert: /var/run/secrets/workload-spiffe-credentials/certificates.pem
    2. Private key: /var/run/secrets/workload-spiffe-credentials/private_key.pem
    3. CA bundle: /var/run/secrets/workload-spiffe-credentials/ca_certificates.pem
  3. If the certificates in the previous step are not available, do the following:

      gcloud privateca subordinates describe SUBORDINATE_CA_POOL_NAME \
        --location=LOCATION

    1. Verify that GKE's control plane has the correct IAM role binding, granting it access to CA Service:

      # Get the IAM policy for the CA
      gcloud privateca roots get-iam-policy ROOT_CA_POOL_NAME

      # Verify that there is an IAM binding granting access in the following format
      - members:
        - serviceAccount:service-projnumber@container-engine-robot.iam.gserviceaccount.com
        role: roles/privateca.certificateManager

      # Where projnumber is the project number (e.g. 2915810291) for the GKE cluster.
    2. Verify that the certificate has not expired. This is the certificate chain and leaf certificate at /var/run/secrets/workload-spiffe-credentials/certificates.pem. To check, run this command:

      cat /var/run/secrets/workload-spiffe-credentials/certificates.pem | openssl x509 -text -noout | grep "Not After"

    3. Verify that the key type is supported by your application by running this command:

      cat /var/run/secrets/workload-spiffe-credentials/certificates.pem | openssl x509 -text -noout | grep "Public Key Algorithm" -A 3

    4. Verify that your gRPC Java application has the following keyAlgorithm in the WorkloadCertificateConfig YAML file:

      keyAlgorithm:
        rsa:
          modulusSize: 4096
  4. Verify that the CA uses the same key family as the certificate key.
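The path checks from step 2 can be scripted. A minimal sketch; CRED_DIR defaults to the credential directory used in this guide and can be overridden for testing:

```shell
# Report which of the three expected credential files are present.
CRED_DIR="${CRED_DIR:-/var/run/secrets/workload-spiffe-credentials}"
missing=0
for f in certificates.pem private_key.pem ca_certificates.pem; do
  if [ -f "$CRED_DIR/$f" ]; then
    echo "found: $f"
  else
    echo "missing: $f"
    missing=1
  fi
done
# $missing is 1 if any credential file is absent.
```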

An application's certificate is rejected by the client, server, or peer

  1. Verify that the peer application uses the same trust bundle to verify thecertificate.
  2. Verify that the certificate in use is not expired (certificate chain along with leaf cert: /var/run/secrets/workload-spiffe-credentials/certificates.pem).

Pods remain in a pending state

If the Pods stay in a pending state during the setup process, increase the CPU and memory resources for the Pods in your deployment spec.

Unable to create cluster with the --enable-mesh-certificates flag

Ensure that you are running the latest version of the gcloud CLI:

gcloud components update

Note that the --enable-mesh-certificates flag works only with gcloud beta.

Pods don't start

Pods that use GKE mesh certificates might fail to start if certificate provisioning is failing. This can happen in situations like the following:

  • The WorkloadCertificateConfig or the TrustConfig is misconfigured or missing.
  • CSRs aren't being approved.

You can check whether certificate provisioning is failing by checking the Pod events.

  1. Check the status of your Pod:

    kubectl get pod -n POD_NAMESPACE POD_NAME

    Replace the following:

    • POD_NAMESPACE: the namespace of your Pod.
    • POD_NAME: the name of your Pod.
  2. Check recent events for your Pod:

    kubectl describe pod -n POD_NAMESPACE POD_NAME
  3. If certificate provisioning is failing, you will see an event with Type=Warning, Reason=FailedMount, From=kubelet, and a Message field that begins with MountVolume.SetUp failed for volume "gke-workload-certificates". The Message field contains troubleshooting information.

    Events:
      Type     Reason       Age                From       Message
      ----     ------       ----               ----       -------
      Warning  FailedMount  13s (x7 over 46s)  kubelet    MountVolume.SetUp failed for volume "gke-workload-certificates" : rpc error: code = Internal desc = unable to mount volume: store.CreateVolume, err: unable to create volume "csi-4d540ed59ef937fbb41a9bf5380a5a534edb3eedf037fe64be36bab0abf45c9c": caPEM is nil (check active WorkloadCertificateConfig)
  4. See the following troubleshooting steps if your Pods don't start because of misconfigured objects or rejected CSRs.

WorkloadCertificateConfig or TrustConfig is misconfigured

Ensure that you created the WorkloadCertificateConfig and TrustConfig objects correctly. You can diagnose misconfigurations on either of these objects using kubectl.

  1. Retrieve the current status.

    For WorkloadCertificateConfig:

    kubectl get WorkloadCertificateConfig default -o yaml

    For TrustConfig:

    kubectl get TrustConfig default -o yaml
  2. Inspect the status output. A valid object has a condition with type: Ready and status: "True".

    status:
      conditions:
      - lastTransitionTime: "2021-03-04T22:24:11Z"
        message: WorkloadCertificateConfig is ready
        observedGeneration: 1
        reason: ConfigReady
        status: "True"
        type: Ready

    For invalid objects, status: "False" appears instead. The reason and message fields contain additional troubleshooting details.

CSRs are not approved

If something goes wrong during the CSR approval process, you can check the error details in the type: Approved and type: Issued conditions of the CSR.

  1. List relevant CSRs using kubectl:

    kubectl get csr \
      --field-selector='spec.signerName=spiffe.gke.io/spiffe-leaf-signer'
  2. Choose a CSR that is either Approved and not Issued, or is not Approved.

  3. Get details for the selected CSR using kubectl:

    kubectl get csr CSR_NAME -o yaml

    Replace CSR_NAME with the name of the CSR you chose.

A valid CSR has a condition with type: Approved and status: "True", and a valid certificate in the status.certificate field:

status:
  certificate: <base64-encoded data>
  conditions:
  - lastTransitionTime: "2021-03-04T21:58:46Z"
    lastUpdateTime: "2021-03-04T21:58:46Z"
    message: Approved CSR because it is a valid SPIFFE SVID for the correct identity.
    reason: AutoApproved
    status: "True"
    type: Approved

Troubleshooting information for invalid CSRs appears in the message and reason fields.

Pods are missing certificates

  1. Get the Pod spec for your Pod:

    kubectl get pod -n POD_NAMESPACE POD_NAME -o yaml

    Replace the following:

    • POD_NAMESPACE: the namespace of your Pod.
    • POD_NAME: the name of your Pod.
  2. Verify that the Pod spec contains the security.cloud.google.com/use-workload-certificates annotation described in Configure Pods to receive mTLS credentials.

  3. Verify that the GKE mesh certificates admission controller successfully injected a CSI driver volume of type workloadcertificates.security.cloud.google.com into your Pod spec:

    volumes:
    ...
    - csi:
        driver: workloadcertificates.security.cloud.google.com
      name: gke-workload-certificates
    ...
  4. Check for the presence of a volume mount in each of the containers:

    containers:
    - name: ...
      ...
      volumeMounts:
      - mountPath: /var/run/secrets/workload-spiffe-credentials
        name: gke-workload-certificates
        readOnly: true
      ...
  5. Verify that the following certificate bundles and the private key are available at the following locations in the Pod:

    • Certificate chain bundle: /var/run/secrets/workload-spiffe-credentials/certificates.pem
    • Private key: /var/run/secrets/workload-spiffe-credentials/private_key.pem
    • CA trust anchor bundle: /var/run/secrets/workload-spiffe-credentials/ca_certificates.pem
  6. If the files are not available, perform the following steps:

    1. Retrieve the CA Service (Preview) instance for the cluster:

      kubectl get workloadcertificateconfigs default -o jsonpath='{.spec.certificateAuthorityConfig.certificateAuthorityServiceConfig.endpointURI}'
    2. Retrieve the status of the CA Service (Preview) instance:

      gcloud privateca ISSUING_CA_TYPE describe ISSUING_CA_NAME \
        --location ISSUING_CA_LOCATION

      Replace the following:

      • ISSUING_CA_TYPE: the issuing CA type, which must be either subordinates or roots.
      • ISSUING_CA_NAME: the name of the issuing CA.
      • ISSUING_CA_LOCATION: the region of the issuing CA.
    3. Get the IAM policy for the root CA:

      gcloud privateca roots get-iam-policy ROOT_CA_NAME

      Replace ROOT_CA_NAME with the name of your root CA.

    4. In the IAM policy, verify that the privateca.auditor policy binding exists:

      ...
      - members:
        - serviceAccount:service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com
        role: roles/privateca.auditor
      ...

      In this example, PROJECT_NUMBER is your cluster's project number.

    5. Get the IAM policy for the subordinate CA:

      gcloud privateca subordinates get-iam-policy SUBORDINATE_CA_NAME

      Replace SUBORDINATE_CA_NAME with the subordinate CA name.

    6. In the IAM policy, verify that the privateca.certificateManager policy binding exists:

      ...
      - members:
        - serviceAccount:service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com
        role: roles/privateca.certificateManager
      ...

      In this example, PROJECT_NUMBER is your cluster's project number.

Applications cannot use issued mTLS credentials

  1. Verify that the certificate has not expired:

    cat /var/run/secrets/workload-spiffe-credentials/certificates.pem | openssl x509 -text -noout | grep "Not After"
  2. Check that the key type you used is supported by your application.

    cat /var/run/secrets/workload-spiffe-credentials/certificates.pem | openssl x509 -text -noout | grep "Public Key Algorithm" -A 3
  3. Check that the issuing CA uses the same key family as the certificate key.

    1. Get the status of the CA Service (Preview) instance:

      gcloud privateca ISSUING_CA_TYPE describe ISSUING_CA_NAME \
        --location ISSUING_CA_LOCATION

      Replace the following:

      • ISSUING_CA_TYPE: the issuing CA type, which must be either subordinates or roots.
      • ISSUING_CA_NAME: the name of the issuing CA.
      • ISSUING_CA_LOCATION: the region of the issuing CA.
    2. Check that the keySpec.algorithm in the output is the same key algorithm you defined in the WorkloadCertificateConfig YAML manifest. The output looks like this:

      config:
        ...
        subjectConfig:
          commonName: td-sub-ca
          subject:
            organization: TestOrgLLC
          subjectAltName: {}
      createTime: '2021-05-04T05:37:58.329293525Z'
      issuingOptions:
        includeCaCertUrl: true
      keySpec:
        algorithm: RSA_PKCS1_2048_SHA256
      ...

Certificates get rejected

  1. Verify that the peer application uses the same trust bundle to verify the certificate.
  2. Verify that the certificate has not expired:

    cat /var/run/secrets/workload-spiffe-credentials/certificates.pem | openssl x509 -text -noout | grep "Not After"
  3. Verify that the client code, if not using the gRPC Go Credentials Reloading API, periodically refreshes the credentials from the file system.

  4. Verify that your workloads are in the same trust domain as your CA. GKE mesh certificates supports communication between workloads in a single trust domain.

Limitations

Cloud Service Mesh service security is supported only with GKE. You cannot deploy service security with Compute Engine.

Cloud Service Mesh does not support scenarios where two or more endpoint policy resources match an endpoint equally, for example, two policies with the same labels and ports, or two or more policies with different labels that match equally with an endpoint's labels. For more information on how endpoint policies are matched to an endpoint's labels, see the APIs for EndpointPolicy.EndpointMatcher.MetadataLabelMatcher. In such situations, Cloud Service Mesh does not generate security configuration from any of the conflicting policies.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2026-02-19 UTC.