Regional Cloud Service Mesh

Preview — Regional Cloud Service Mesh

This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

With regional isolation, clients connecting to a specific region of the Cloud Service Mesh control plane can only access resources within that region. Similarly, API resources within a specific region can only refer to other resources in that region.
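As a quick illustration of this isolation (an optional check, assuming the gcloud CLI and the Network Services API are available in your project):

export REGION="us-central1"
# Lists only the meshes created in us-central1; meshes created in other
# regions, or with the global API, do not appear in this output.
gcloud network-services meshes list --location=${REGION}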

Regional Cloud Service Mesh has the following limitations:

  • The Istio API is not supported. You cannot use Kubernetes with the Istio API using regional Traffic Director. Only Google Cloud APIs are supported in this preview.
  • The existing considerations and limitations of the global service routing APIs apply.
  • The minimum Envoy version that supports xdstp naming schemes is v1.31.1.
  • Gateway for Mesh API is not supported.
  • The minimum gRPC version is v1.65.
  • Only the following regions are supported:

    africa-south1, asia-east1, asia-east2, asia-northeast1, asia-northeast2, asia-northeast3,
    asia-south1, asia-south2, asia-southeast1, asia-southeast2, australia-southeast1,
    australia-southeast2, europe-central2, europe-north1, europe-north2, europe-southwest1,
    europe-west10, europe-west12, europe-west1, europe-west2, europe-west3, europe-west4,
    europe-west6, europe-west8, europe-west9, me-central1, me-central2, me-west1,
    northamerica-northeast1, northamerica-northeast2, northamerica-south1, southamerica-east1,
    southamerica-west1, us-central1, us-east1, us-east4, us-east5, us-south1, us-west1,
    us-west2, us-west3, us-west4

Pricing

Each region in which regional Cloud Service Mesh is supported will have a regional SKU when this feature is Generally Available. For now, pricing is the same as for global Cloud Service Mesh.

Prepare xDS client for Cloud Service Mesh

Compute VM Envoy xDS

Manual

The manual steps build on Set up VMs using manual Envoy deployment. The main difference is that ENVOY_CONTROL_PLANE_REGION is set and injected into the bootstrap.

  1. Create the instance template:

    gcloud compute instance-templates create td-vm-template \
      --scopes=https://www.googleapis.com/auth/cloud-platform \
      --tags=http-td-tag,http-server,https-server \
      --image-family=debian-11 \
      --image-project=debian-cloud \
      --metadata=startup-script='#! /usr/bin/env bash
    # Set variables
    export ENVOY_CONTROL_PLANE_REGION="us-central1"
    export ENVOY_USER="envoy"
    export ENVOY_USER_UID="1337"
    export ENVOY_USER_GID="1337"
    export ENVOY_USER_HOME="/opt/envoy"
    export ENVOY_CONFIG="${ENVOY_USER_HOME}/config.yaml"
    export ENVOY_PORT="15001"
    export ENVOY_ADMIN_PORT="15000"
    export ENVOY_TRACING_ENABLED="false"
    export ENVOY_XDS_SERVER_CERT="/etc/ssl/certs/ca-certificates.crt"
    export ENVOY_ACCESS_LOG="/dev/stdout"
    export ENVOY_NODE_ID="$(cat /proc/sys/kernel/random/uuid)~$(hostname -i)"
    export BOOTSTRAP_TEMPLATE="${ENVOY_USER_HOME}/bootstrap_template.yaml"
    export GCE_METADATA_SERVER="169.254.169.254/32"
    export INTERCEPTED_CIDRS="*"
    export GCP_PROJECT_NUMBER=PROJECT_NUMBER
    export VPC_NETWORK_NAME=mesh:sidecar-mesh
    export GCE_ZONE=$(curl -sS -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/zone | cut -d"/" -f4)
    # Create system user account for Envoy binary
    sudo groupadd ${ENVOY_USER} \
      --gid=${ENVOY_USER_GID} \
      --system
    sudo adduser ${ENVOY_USER} \
      --uid=${ENVOY_USER_UID} \
      --gid=${ENVOY_USER_GID} \
      --home=${ENVOY_USER_HOME} \
      --disabled-login \
      --system
    # Download and extract the Cloud Service Mesh tar.gz file
    cd ${ENVOY_USER_HOME}
    sudo curl -sL https://storage.googleapis.com/traffic-director/traffic-director-xdsv3.tar.gz -o traffic-director-xdsv3.tar.gz
    sudo tar -xvzf traffic-director-xdsv3.tar.gz traffic-director-xdsv3/bootstrap_template.yaml \
      -C bootstrap_template.yaml \
      --strip-components 1
    sudo tar -xvzf traffic-director-xdsv3.tar.gz traffic-director-xdsv3/iptables.sh \
      -C iptables.sh \
      --strip-components 1
    sudo rm traffic-director-xdsv3.tar.gz
    # Generate Envoy bootstrap configuration
    cat "${BOOTSTRAP_TEMPLATE}" \
      | sed -e "s|ENVOY_NODE_ID|${ENVOY_NODE_ID}|g" \
      | sed -e "s|ENVOY_ZONE|${GCE_ZONE}|g" \
      | sed -e "s|VPC_NETWORK_NAME|${VPC_NETWORK_NAME}|g" \
      | sed -e "s|CONFIG_PROJECT_NUMBER|${GCP_PROJECT_NUMBER}|g" \
      | sed -e "s|ENVOY_PORT|${ENVOY_PORT}|g" \
      | sed -e "s|ENVOY_ADMIN_PORT|${ENVOY_ADMIN_PORT}|g" \
      | sed -e "s|XDS_SERVER_CERT|${ENVOY_XDS_SERVER_CERT}|g" \
      | sed -e "s|TRACING_ENABLED|${ENVOY_TRACING_ENABLED}|g" \
      | sed -e "s|ACCESSLOG_PATH|${ENVOY_ACCESS_LOG}|g" \
      | sed -e "s|BACKEND_INBOUND_PORTS|${BACKEND_INBOUND_PORTS}|g" \
      | sed -e "s|trafficdirector.googleapis.com|trafficdirector.${ENVOY_CONTROL_PLANE_REGION}.rep.googleapis.com|g" \
      | sudo tee "${ENVOY_CONFIG}"
    # Install Envoy binary
    wget -O envoy_key https://apt.envoyproxy.io/signing.key
    cat envoy_key | sudo gpg --dearmor > $(pwd)/envoy-keyring.gpg
    echo "deb [arch=$(dpkg --print-architecture) signed-by=$(pwd)/envoy-keyring.gpg] https://apt.envoyproxy.io bullseye main" | sudo tee /etc/apt/sources.list.d/envoy.list
    sudo apt-get update
    sudo apt-get install envoy
    # Run Envoy as systemd service
    sudo systemd-run --uid=${ENVOY_USER_UID} --gid=${ENVOY_USER_GID} \
      --working-directory=${ENVOY_USER_HOME} --unit=envoy.service \
      bash -c "/usr/bin/envoy --config-path ${ENVOY_CONFIG} | tee"
    # Configure iptables for traffic interception and redirection
    sudo ${ENVOY_USER_HOME}/iptables.sh \
      -p "${ENVOY_PORT}" \
      -u "${ENVOY_USER_UID}" \
      -g "${ENVOY_USER_GID}" \
      -m "REDIRECT" \
      -i "${INTERCEPTED_CIDRS}" \
      -x "${GCE_METADATA_SERVER}"'
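     After a VM created from this template boots, you can sanity-check that Envoy reached the regional control plane. A minimal sketch, run on the VM, assuming the admin port (15000) from the script above:

     # The Envoy admin /clusters endpoint lists the configured clusters;
     # after the substitution above, the xDS cluster should reference the
     # regional endpoint rather than trafficdirector.googleapis.com.
     curl -s localhost:15000/clusters | grep rep.googleapis.com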

Compute VM gRPC xDS

Similar to global Cloud Service Mesh, gRPC clients need a bootstrap configuration that tells them how to connect to regional Cloud Service Mesh.

You can use the gRPC bootstrap generator to generate this bootstrap. To set it to use regional Cloud Service Mesh, specify a new flag: --xds-server-region.

In this example, setting xds-server-region to us-central1 automatically determines the regional Cloud Service Mesh endpoint: trafficdirector.us-central1.rep.googleapis.com:443.
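A hedged sketch of such a generator invocation; the mesh name, output path, and generator binary mirror the client VM example later in this guide and are illustrative:

# Generates a bootstrap pointing at the us-central1 regional control plane.
./td-grpc-bootstrap --config-mesh=grpc-mesh \
  --gcp-project-number=PROJECT_NUMBER \
  --xds-server-region=us-central1 | tee /run/td-grpc-bootstrap.json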

K8s Manual Envoy Injection

The manual steps build on Set up Google Kubernetes Engine Pods using manual Envoy injection. However, you only need to modify the section about manual pod injection.

  1. Change the control plane from global to regional:

    wget -q -O trafficdirector_client_new_api_sample_xdsv3.yaml https://storage.googleapis.com/traffic-director/demo/trafficdirector_client_new_api_sample_xdsv3.yaml
    sed -i "s/PROJECT_NUMBER/PROJECT_NUMBER/g" trafficdirector_client_new_api_sample_xdsv3.yaml
    sed -i "s/MESH_NAME/MESH_NAME/g" trafficdirector_client_new_api_sample_xdsv3.yaml
    sed -i "s|trafficdirector.googleapis.com|trafficdirector.${REGION}.rep.googleapis.com|g" trafficdirector_client_new_api_sample_xdsv3.yaml
    sed -i "s|gcr.io/google-containers/busybox|busybox:stable|g" trafficdirector_client_new_api_sample_xdsv3.yaml
  2. Apply the changes:

      kubectl apply -f trafficdirector_client_new_api_sample_xdsv3.yaml
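    As an optional sanity check before and after applying (a sketch, assuming the manifest is still in your working directory), confirm that the substitution points the bootstrap at the regional endpoint and that the Pods start:

      # Should print lines containing trafficdirector.REGION.rep.googleapis.com
      grep "rep.googleapis.com" trafficdirector_client_new_api_sample_xdsv3.yaml
      kubectl get pods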

Setup guides

This section covers five independent configurations and deployment models. These are all regionalized versions of existing global service routing API setup guides.

Configure proxyless gRPC services with regional GRPCRoute and regional Cloud Service Mesh

This section explains how to configure a proxyless gRPC service mesh with regional Cloud Service Mesh and regional GRPCRoute resources.

For your convenience, store the number of the Google Cloud project in which you perform the configuration, so that all examples in this guide can be copy-pasted into the command line:

export PROJECT="PROJECT_NUMBER"
export REGION="us-central1"
export ZONE="us-central1-a"

Replace PROJECT_NUMBER with your project number.

Optionally, you can replace the following:

  • us-central1 with a different region you want to use.
  • us-central1-a with a different zone you want to use.
  • default with a different VPC network name you want to use.

Mesh configuration

When a proxyless gRPC application connects to an xds:// hostname, the gRPC client library establishes a connection to Cloud Service Mesh to get the routing configuration required to route requests for the hostname.

  1. Create a Mesh specification and store it in the mesh.yaml file:

cat <<EOF > mesh.yaml
name: grpc-mesh
EOF
  2. Create a Mesh using the mesh.yaml specification:

    gcloud network-services meshes import grpc-mesh \
      --source=mesh.yaml \
      --location=${REGION}

     After the regional mesh is created, Cloud Service Mesh is ready to serve the configuration. However, because no services are defined yet, the configuration is empty.
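     As an optional check, you can read the resource back; the describe command prints the mesh, including its full name with project, location, and mesh ID:

     gcloud network-services meshes describe grpc-mesh \
       --location=${REGION}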

gRPC service configuration

For demonstration purposes, you will create a regional Backend Service with auto-scaled VMs (using Managed instance groups - MIG) that serve hello world using the gRPC protocol on port 50051.

  1. Create the Compute Engine VM instance template with a helloworld gRPC service that is exposed on port 50051:

    gcloud compute instance-templates create grpc-td-vm-template \
      --scopes=https://www.googleapis.com/auth/cloud-platform \
      --tags=allow-health-checks \
      --image-family=debian-11 \
      --image-project=debian-cloud \
      --metadata-from-file=startup-script=<(echo '#! /bin/bash
    set -e
    cd /root
    sudo apt-get update -y
    sudo apt-get install -y openjdk-11-jdk-headless
    curl -L https://github.com/grpc/grpc-java/archive/v1.38.0.tar.gz | tar -xz
    cd grpc-java-1.38.0/examples/example-hostname
    ../gradlew --no-daemon installDist
    # Server listens on 50051
    sudo systemd-run ./build/install/hostname-server/bin/hostname-server')
  2. Create a MIG based on the template:

    gcloud compute instance-groups managed create grpc-td-mig-us-central1 \
      --zone=${ZONE} \
      --size=2 \
      --template=grpc-td-vm-template
  3. Configure a named port for the gRPC service. This is the port on which the gRPC service is configured to listen for requests.

    gcloud compute instance-groups set-named-ports grpc-td-mig-us-central1 \
      --named-ports=grpc-helloworld-port:50051 \
      --zone=${ZONE}

    In this example, the port is 50051.

  4. Create a gRPC health check:

    gcloud compute health-checks create grpc grpc-helloworld-health-check \
      --use-serving-port --region=${REGION}

     The services must implement the gRPC health checking protocol for gRPC health checks to function properly. For more information, see Creating health checks.

  5. Create a firewall rule to allow incoming health check connections to instances in your network:

    gcloud compute firewall-rules create grpc-vm-allow-health-checks \
      --network default --action allow --direction INGRESS \
      --source-ranges 35.191.0.0/16,209.85.152.0/22,209.85.204.0/22 \
      --target-tags allow-health-checks \
      --rules tcp:50051
  6. Create a regional Backend Service with a load balancing scheme of INTERNAL_SELF_MANAGED and add the health check created earlier to the Backend Service. The managed instance group is added in the next step. The port associated with the specified port-name is used to connect to the VMs in the managed instance group.

    gcloud compute backend-services create grpc-helloworld-service \
      --load-balancing-scheme=INTERNAL_SELF_MANAGED \
      --protocol=GRPC \
      --port-name=grpc-helloworld-port \
      --health-checks="https://www.googleapis.com/compute/v1/projects/${PROJECT}/regions/${REGION}/healthChecks/grpc-helloworld-health-check" \
      --region=${REGION}
  7. Add the managed instance group to the Backend Service:

    gcloud compute backend-services add-backend grpc-helloworld-service \
      --instance-group=grpc-td-mig-us-central1 \
      --instance-group-zone=${ZONE} \
      --region=${REGION}
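     Optionally, verify that the backends become healthy before you continue; it can take a minute or two for the first health checks to pass:

     gcloud compute backend-services get-health grpc-helloworld-service \
       --region=${REGION}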

Set up routing with regional GRPCRoute

At this point, the regional mesh and the gRPC server service are configured. Now you can set up the required routing.

  1. Create a regional GRPCRoute specification and store it in the grpc_route.yaml file:

cat <<EOF > grpc_route.yaml
name: helloworld-grpc-route
hostnames:
- helloworld-gce
meshes:
- projects/${PROJECT}/locations/${REGION}/meshes/grpc-mesh
rules:
- action:
    destinations:
    - serviceName: projects/${PROJECT}/locations/${REGION}/backendServices/grpc-helloworld-service
EOF
  2. Create the regional GRPCRoute using the grpc_route.yaml specification:

    gcloud network-services grpc-routes import helloworld-grpc-route \
      --source=grpc_route.yaml \
      --location=${REGION}

     Cloud Service Mesh is now configured to load balance traffic for the services specified in the gRPC Route across backends in the managed instance group.
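     If you want to confirm what was imported, you can read the route back:

     gcloud network-services grpc-routes describe helloworld-grpc-route \
       --location=${REGION}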

Create gRPC client service

To verify the configuration, instantiate a client application with a proxyless gRPC data plane. This application must specify (in its bootstrap file) the name of the mesh.

Once configured, this application can send requests to the instances or endpoints associated with helloworld-gce using the xds:///helloworld-gce service URI.

In the following examples, use the grpcurl tool to test the gRPC service.

  1. Create a client VM:

    gcloud compute instances create grpc-client \
      --zone=${ZONE} \
      --scopes=https://www.googleapis.com/auth/cloud-platform \
      --image-family=debian-11 \
      --image-project=debian-cloud \
      --metadata-from-file=startup-script=<(echo '#! /bin/bash
    set -ex
    export PROJECT=PROJECT_NUMBER
    export REGION=us-central1
    export GRPC_XDS_BOOTSTRAP=/run/td-grpc-bootstrap.json
    echo export GRPC_XDS_BOOTSTRAP=$GRPC_XDS_BOOTSTRAP | sudo tee /etc/profile.d/grpc-xds-bootstrap.sh
    curl -L https://storage.googleapis.com/traffic-director/td-grpc-bootstrap-0.18.0.tar.gz | tar -xz
    ./td-grpc-bootstrap-0.18.0/td-grpc-bootstrap --config-mesh=grpc-mesh --xds-server-uri=trafficdirector.${REGION}.rep.googleapis.com:443 --gcp-project-number=${PROJECT} | sudo tee $GRPC_XDS_BOOTSTRAP
    sudo sed -i "s|\"authorities\": {|\"authorities\": {\n    \"traffic-director.${REGION}.xds.googleapis.com\": {\"xds_servers\":[{\"server_uri\": \"trafficdirector.${REGION}.rep.googleapis.com:443\", \"channel_creds\": [ { \"type\": \"google_default\" } ], \"server_features\": [ \"xds_v3\", \"ignore_resource_deletion\" ]}], \"client_listener_resource_name_template\": \"xdstp://traffic-director.${REGION}.xds.googleapis.com/envoy.config.listener.v3.Listener/${PROJECT}/mesh:grpc-mesh/%s\"},|g" $GRPC_XDS_BOOTSTRAP
    sudo sed -i "s|\"client_default_listener_resource_name_template\": \"xdstp://traffic-director-global.xds.googleapis.com|\"client_default_listener_resource_name_template\": \"xdstp://traffic-director.${REGION}.xds.googleapis.com|g" $GRPC_XDS_BOOTSTRAP')

Set up the environment variable and bootstrap file

The client application needs a bootstrap configuration file. The startup script in the previous section sets the GRPC_XDS_BOOTSTRAP environment variable and uses a helper script to generate the bootstrap file. The values for TRAFFICDIRECTOR_GCP_PROJECT_NUMBER and zone in the generated bootstrap file are obtained from the metadata server, which knows these details about your Compute Engine VM instances.

You can provide these values to the helper script manually by using the --gcp-project-number option. You must provide a mesh name that matches the mesh resource by using the --config-mesh option.
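For orientation, the following is an abridged, illustrative sketch of the kind of bootstrap file the helper generates for a regional deployment; the exact fields and values vary by generator version, so treat it as an example rather than a reference:

{
  "xds_servers": [
    {
      "server_uri": "trafficdirector.us-central1.rep.googleapis.com:443",
      "channel_creds": [{ "type": "google_default" }],
      "server_features": ["xds_v3", "ignore_resource_deletion"]
    }
  ],
  "node": {
    "id": "projects/PROJECT_NUMBER/networks/mesh:grpc-mesh/nodes/UUID",
    "locality": { "zone": "us-central1-a" }
  }
}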

  1. To verify the configuration, sign in to the client:

    gcloud compute ssh grpc-client --zone=${ZONE}
  2. Download and install the grpcurl tool:

    curl -L https://github.com/fullstorydev/grpcurl/releases/download/v1.9.3/grpcurl_1.9.3_linux_x86_64.tar.gz | tar -xz
  3. Run the grpcurl tool with xds:///helloworld-gce as the service URI and helloworld.Greeter/SayHello as the service name and method to invoke.

    ./grpcurl --plaintext \
      -d '{"name": "world"}' \
      xds:///helloworld-gce helloworld.Greeter/SayHello

    The parameters to the SayHello method are passed using the -d option.

     You should see output similar to the following, where INSTANCE_HOSTNAME is the name of the VM instance:

    Greeting: Hello world, from INSTANCE_HOSTNAME

This verifies that the proxyless gRPC client successfully connected to Cloud Service Mesh and learned about the backends for the helloworld-gce service using the xDS name resolver. The client sent a request to one of the service's backends without needing to know its IP address or perform DNS resolution.

Configure Envoy sidecar proxies with HTTP services, regional HTTPRoute, and regional Mesh

This section explains how to configure an Envoy proxy-based service mesh with regional mesh and regional HTTPRoute resources.

For your convenience, store the number of the Google Cloud project in which you perform the configuration, so that all examples in this guide can be copy-pasted into the command line:

export PROJECT_ID="PROJECT_ID"
export PROJECT="PROJECT_NUMBER"
export REGION="us-central1"
export ZONE="us-central1-a"

Replace the following:

  • PROJECT_ID with your project ID.
  • PROJECT_NUMBER with your project number.

Optionally, you can replace the following:

  • us-central1 with a different region you want to use.
  • us-central1-a with a different zone you want to use.

Mesh configuration

The sidecar Envoy proxy receives the service routing configuration from Cloud Service Mesh. The sidecar presents the name of the regional mesh resource to identify the specific service mesh configured. The routing configuration received from Cloud Service Mesh is used to direct the traffic going through the sidecar proxy to various regional Backend Services, depending on request parameters, such as the hostname or headers, configured in the Route resource(s).

Note that the mesh name is the key that the sidecar proxy uses to request theconfiguration associated with this mesh.

  1. Create a regional mesh specification and store it in the mesh.yaml file:

cat <<EOF > mesh.yaml
name: sidecar-mesh
EOF

     The interception port defaults to 15001 if unspecified. A sketch showing how to set it explicitly follows this procedure.

  2. Create the regional mesh using the mesh.yaml specification:

    gcloud network-services meshes import sidecar-mesh \
      --source=mesh.yaml \
      --location=${REGION}

     After the regional mesh is created, Cloud Service Mesh is ready to serve the configuration. However, because no services are defined yet, the configuration is empty.
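     If you prefer to set the interception port explicitly rather than rely on the 15001 default, the Mesh specification accepts an interceptionPort field. A minimal sketch:

cat <<EOF > mesh.yaml
name: sidecar-mesh
interceptionPort: 15001
EOF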

HTTP server configuration

For demonstration purposes, you will create a regional Backend Service with auto-scaled VMs (using Managed instance groups - MIG) that serve "hello world" using the HTTP protocol on port 80.

  1. Create the Compute Engine VM instance template with a helloworld HTTP service that is exposed on port 80:

    gcloud compute instance-templates create td-httpd-vm-template \
      --scopes=https://www.googleapis.com/auth/cloud-platform \
      --tags=http-td-server \
      --image-family=debian-11 \
      --image-project=debian-cloud \
      --metadata=startup-script="#! /bin/bash
    sudo apt-get update -y
    sudo apt-get install apache2 -y
    sudo service apache2 restart
    echo '<!doctype html><html><body><h1>'\`/bin/hostname\`'</h1></body></html>' | sudo tee /var/www/html/index.html"
  2. Create a MIG based on the template:

    gcloud compute instance-groups managed create http-td-mig-us-central1 \
      --zone=${ZONE} \
      --size=2 \
      --template=td-httpd-vm-template
  3. Create the health check:

    gcloud compute health-checks create http http-helloworld-health-check --region=${REGION}
  4. Create a firewall rule to allow incoming health check connections toinstances in your network:

    gcloud compute firewall-rules create http-vm-allow-health-checks \
      --network default --action allow --direction INGRESS \
      --source-ranges 35.191.0.0/16,209.85.152.0/22,209.85.204.0/22 \
      --target-tags http-td-server \
      --rules tcp:80
  5. Create a regional Backend Service with a load balancing scheme of INTERNAL_SELF_MANAGED:

    gcloud compute backend-services create http-helloworld-service \
      --load-balancing-scheme=INTERNAL_SELF_MANAGED \
      --protocol=HTTP \
      --health-checks="https://www.googleapis.com/compute/v1/projects/${PROJECT}/regions/${REGION}/healthChecks/http-helloworld-health-check" \
      --region=${REGION}
  6. Add the health check and a managed or unmanaged instance group to the backend service:

    gcloud compute backend-services add-backend http-helloworld-service \
      --instance-group=http-td-mig-us-central1 \
      --instance-group-zone=${ZONE} \
      --region=${REGION}

     This example uses the managed instance group with the Compute Engine VM template that runs the sample HTTP service we created earlier.
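     Optionally, confirm that the backends report healthy before continuing; the first checks can take a minute or two:

     gcloud compute backend-services get-health http-helloworld-service \
       --region=${REGION}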

Set up routing with regional HTTPRoute

The mesh resource and HTTP server are configured. You can now connect them using an HTTPRoute resource that associates a hostname with a Backend Service.

  1. Create the HTTPRoute specification and store it as http_route.yaml:

cat <<EOF > http_route.yaml
name: helloworld-http-route
hostnames:
- helloworld-gce
meshes:
- projects/${PROJECT}/locations/${REGION}/meshes/sidecar-mesh
rules:
- action:
    destinations:
    - serviceName: projects/${PROJECT}/locations/${REGION}/backendServices/http-helloworld-service
EOF
  2. Create the HTTPRoute using the http_route.yaml specification:

    gcloud network-services http-routes import helloworld-http-route \
      --source=http_route.yaml \
      --location=${REGION}

     Cloud Service Mesh is now configured to load balance traffic for the services specified in the HTTPRoute across backends in the managed instance group.
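     As an optional check, read the route back to confirm the hostname and destination:

     gcloud network-services http-routes describe helloworld-http-route \
       --location=${REGION}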

Create HTTP client with Envoy sidecar

In this section, you instantiate a client VM with an Envoy sidecar proxy to request the Cloud Service Mesh configuration created earlier. Note that the mesh parameter in the Google Cloud CLI command references the mesh resource created earlier.

gcloud compute instance-templates create td-vm-template \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --tags=http-td-tag,http-server,https-server \
  --image-family=debian-11 \
  --image-project=debian-cloud \
  --metadata=startup-script='#! /usr/bin/env bash
# Set variables
export ENVOY_CONTROL_PLANE_REGION="us-central1"
export ENVOY_USER="envoy"
export ENVOY_USER_UID="1337"
export ENVOY_USER_GID="1337"
export ENVOY_USER_HOME="/opt/envoy"
export ENVOY_CONFIG="${ENVOY_USER_HOME}/config.yaml"
export ENVOY_PORT="15001"
export ENVOY_ADMIN_PORT="15000"
export ENVOY_TRACING_ENABLED="false"
export ENVOY_XDS_SERVER_CERT="/etc/ssl/certs/ca-certificates.crt"
export ENVOY_ACCESS_LOG="/dev/stdout"
export ENVOY_NODE_ID="$(cat /proc/sys/kernel/random/uuid)~$(hostname -i)"
export BOOTSTRAP_TEMPLATE="${ENVOY_USER_HOME}/bootstrap_template.yaml"
export GCE_METADATA_SERVER="169.254.169.254/32"
export INTERCEPTED_CIDRS="*"
export GCP_PROJECT_NUMBER=PROJECT_NUMBER
export VPC_NETWORK_NAME=mesh:sidecar-mesh
export GCE_ZONE=$(curl -sS -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/zone | cut -d"/" -f4)
# Create system user account for Envoy binary
sudo groupadd ${ENVOY_USER} \
  --gid=${ENVOY_USER_GID} \
  --system
sudo adduser ${ENVOY_USER} \
  --uid=${ENVOY_USER_UID} \
  --gid=${ENVOY_USER_GID} \
  --home=${ENVOY_USER_HOME} \
  --disabled-login \
  --system
# Download and extract the Cloud Service Mesh tar.gz file
cd ${ENVOY_USER_HOME}
sudo curl -sL https://storage.googleapis.com/traffic-director/traffic-director-xdsv3.tar.gz -o traffic-director-xdsv3.tar.gz
sudo tar -xvzf traffic-director-xdsv3.tar.gz traffic-director-xdsv3/bootstrap_template.yaml \
  -C bootstrap_template.yaml \
  --strip-components 1
sudo tar -xvzf traffic-director-xdsv3.tar.gz traffic-director-xdsv3/iptables.sh \
  -C iptables.sh \
  --strip-components 1
sudo rm traffic-director-xdsv3.tar.gz
# Generate Envoy bootstrap configuration
cat "${BOOTSTRAP_TEMPLATE}" \
  | sed -e "s|ENVOY_NODE_ID|${ENVOY_NODE_ID}|g" \
  | sed -e "s|ENVOY_ZONE|${GCE_ZONE}|g" \
  | sed -e "s|VPC_NETWORK_NAME|${VPC_NETWORK_NAME}|g" \
  | sed -e "s|CONFIG_PROJECT_NUMBER|${GCP_PROJECT_NUMBER}|g" \
  | sed -e "s|ENVOY_PORT|${ENVOY_PORT}|g" \
  | sed -e "s|ENVOY_ADMIN_PORT|${ENVOY_ADMIN_PORT}|g" \
  | sed -e "s|XDS_SERVER_CERT|${ENVOY_XDS_SERVER_CERT}|g" \
  | sed -e "s|TRACING_ENABLED|${ENVOY_TRACING_ENABLED}|g" \
  | sed -e "s|ACCESSLOG_PATH|${ENVOY_ACCESS_LOG}|g" \
  | sed -e "s|BACKEND_INBOUND_PORTS|${BACKEND_INBOUND_PORTS}|g" \
  | sed -e "s|trafficdirector.googleapis.com|trafficdirector.${ENVOY_CONTROL_PLANE_REGION}.rep.googleapis.com|g" \
  | sudo tee "${ENVOY_CONFIG}"
# Install Envoy binary
wget -O envoy_key https://apt.envoyproxy.io/signing.key
cat envoy_key | sudo gpg --dearmor > $(pwd)/envoy-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=$(pwd)/envoy-keyring.gpg] https://apt.envoyproxy.io bullseye main" | sudo tee /etc/apt/sources.list.d/envoy.list
sudo apt-get update
sudo apt-get install envoy
# Run Envoy as systemd service
sudo systemd-run --uid=${ENVOY_USER_UID} --gid=${ENVOY_USER_GID} \
  --working-directory=${ENVOY_USER_HOME} --unit=envoy.service \
  bash -c "/usr/bin/envoy --config-path ${ENVOY_CONFIG} | tee"
# Configure iptables for traffic interception and redirection
sudo ${ENVOY_USER_HOME}/iptables.sh \
  -p "${ENVOY_PORT}" \
  -u "${ENVOY_USER_UID}" \
  -g "${ENVOY_USER_GID}" \
  -m "REDIRECT" \
  -i "${INTERCEPTED_CIDRS}" \
  -x "${GCE_METADATA_SERVER}"'

gcloud compute instances create td-vm-client \
  --zone=${ZONE} \
  --source-instance-template td-vm-template
  1. Log in to the created VM:

    gcloud compute ssh td-vm-client --zone=${ZONE}
  2. Verify HTTP connectivity to the created test services:

    curl-H"Host: helloworld-gce"http://10.0.0.1/

     The command returns a response from one of the VMs in the Managed Instance Group with its hostname printed to the console.

Configure TCP services with regional TCPRoute

This configuration flow is very similar to Set up Envoy proxies with HTTP services, except that the Backend Service provides a TCP service and routing is based on TCP/IP parameters rather than on the HTTP protocol.

For your convenience, store the number of the Google Cloud project in which you perform the configuration, so that all examples in this guide can be copy-pasted into the command line:

export PROJECT_ID="PROJECT_ID"
export PROJECT="PROJECT_NUMBER"
export REGION="us-central1"
export ZONE="us-central1-a"

Replace the following

  • PROJECT_ID with your project ID.
  • PROJECT_NUMBER with your project number.

Optionally, you can replace the following:

  • us-central1 with a different region you want to use.
  • us-central1-a with a different zone you want to use.

Mesh configuration

  1. Create a regional mesh specification and store it in the mesh.yaml file:

cat <<EOF > mesh.yaml
name: sidecar-mesh
EOF
  2. Create the regional mesh using the mesh.yaml specification:

    gcloud network-services meshes import sidecar-mesh \
      --source=mesh.yaml \
      --location=${REGION}

TCP server configuration

For demonstration purposes, you will create a regional Backend Service with auto-scaled VMs (using Managed instance groups - MIG) that serve a "hello world" response over plain TCP on port 10000.

  1. Create the Compute Engine VM instance template with a test service on port 10000 using the netcat utility:

    gcloud compute instance-templates create tcp-td-vm-template \
      --scopes=https://www.googleapis.com/auth/cloud-platform \
      --tags=allow-health-checks \
      --image-family=debian-11 \
      --image-project=debian-cloud \
      --metadata=startup-script="#! /bin/bash
    sudo apt-get update -y
    sudo apt-get install netcat -y
    while true;
      do echo 'Hello from TCP service' | nc -l -s 0.0.0.0 -p 10000;
    done &"
  2. Create a MIG based on the template:

    gcloud compute instance-groups managed create tcp-td-mig-us-central1 \
      --zone=${ZONE} \
      --size=1 \
      --template=tcp-td-vm-template
  3. Set the named ports on the created managed instance group to port 10000:

    gcloud compute instance-groups set-named-ports tcp-td-mig-us-central1 \
      --zone=${ZONE} --named-ports=tcp:10000
  4. Create a regional health check:

    gcloud compute health-checks create tcp tcp-helloworld-health-check --port 10000 --region=${REGION}
  5. Create a firewall rule to allow incoming health check connections toinstances in your network:

    gcloud compute firewall-rules create tcp-vm-allow-health-checks \
      --network default --action allow --direction INGRESS \
      --source-ranges 35.191.0.0/16,209.85.152.0/22,209.85.204.0/22 \
      --target-tags allow-health-checks \
      --rules tcp:10000
  6. Create a regional Backend Service with a load balancing scheme ofINTERNAL_SELF_MANAGED and add the health check and a managed or unmanagedinstance group to the backend service.

    gcloud compute backend-services create tcp-helloworld-service \
      --region=${REGION} \
      --load-balancing-scheme=INTERNAL_SELF_MANAGED \
      --protocol=TCP \
      --port-name=tcp \
      --health-checks="https://www.googleapis.com/compute/v1/projects/${PROJECT}/regions/${REGION}/healthChecks/tcp-helloworld-health-check"
  7. Add the MIG to the BackendService:

    gcloud compute backend-services add-backend tcp-helloworld-service \
      --instance-group tcp-td-mig-us-central1 \
      --instance-group-zone=${ZONE} \
      --region=${REGION}

Set up routing with regional TCPRoute

  1. Create the TCPRoute specification and store it in the tcp_route.yaml file:

cat <<EOF > tcp_route.yaml
name: helloworld-tcp-route
meshes:
- projects/${PROJECT}/locations/${REGION}/meshes/sidecar-mesh
rules:
- action:
    destinations:
    - serviceName: projects/${PROJECT}/locations/${REGION}/backendServices/tcp-helloworld-service
  matches:
  - address: '10.0.0.1/32'
    port: '10000'
EOF
  2. Create the TCPRoute using the tcp_route.yaml specification:

    gcloud network-services tcp-routes import helloworld-tcp-route \
      --source=tcp_route.yaml \
      --location=${REGION}
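     Optionally, read the route back to confirm the address and port match:

     gcloud network-services tcp-routes describe helloworld-tcp-route \
       --location=${REGION}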

Create TCP client with Envoy sidecar

  1. Create a VM with Envoy connected to Cloud Service Mesh:

    gcloud compute instance-templates create td-vm-template \
      --scopes=https://www.googleapis.com/auth/cloud-platform \
      --tags=http-td-tag,http-server,https-server \
      --image-family=debian-11 \
      --image-project=debian-cloud \
      --metadata=startup-script='#! /usr/bin/env bash
    # Set variables
    export ENVOY_CONTROL_PLANE_REGION="us-central1"
    export ENVOY_USER="envoy"
    export ENVOY_USER_UID="1337"
    export ENVOY_USER_GID="1337"
    export ENVOY_USER_HOME="/opt/envoy"
    export ENVOY_CONFIG="${ENVOY_USER_HOME}/config.yaml"
    export ENVOY_PORT="15001"
    export ENVOY_ADMIN_PORT="15000"
    export ENVOY_TRACING_ENABLED="false"
    export ENVOY_XDS_SERVER_CERT="/etc/ssl/certs/ca-certificates.crt"
    export ENVOY_ACCESS_LOG="/dev/stdout"
    export ENVOY_NODE_ID="$(cat /proc/sys/kernel/random/uuid)~$(hostname -i)"
    export BOOTSTRAP_TEMPLATE="${ENVOY_USER_HOME}/bootstrap_template.yaml"
    export GCE_METADATA_SERVER="169.254.169.254/32"
    export INTERCEPTED_CIDRS="*"
    export GCP_PROJECT_NUMBER=PROJECT_NUMBER
    export VPC_NETWORK_NAME=mesh:sidecar-mesh
    export GCE_ZONE=$(curl -sS -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/zone | cut -d"/" -f4)
    # Create system user account for Envoy binary
    sudo groupadd ${ENVOY_USER} \
      --gid=${ENVOY_USER_GID} \
      --system
    sudo adduser ${ENVOY_USER} \
      --uid=${ENVOY_USER_UID} \
      --gid=${ENVOY_USER_GID} \
      --home=${ENVOY_USER_HOME} \
      --disabled-login \
      --system
    # Download and extract the Cloud Service Mesh tar.gz file
    cd ${ENVOY_USER_HOME}
    sudo curl -sL https://storage.googleapis.com/traffic-director/traffic-director-xdsv3.tar.gz -o traffic-director-xdsv3.tar.gz
    sudo tar -xvzf traffic-director-xdsv3.tar.gz traffic-director-xdsv3/bootstrap_template.yaml \
      -C bootstrap_template.yaml \
      --strip-components 1
    sudo tar -xvzf traffic-director-xdsv3.tar.gz traffic-director-xdsv3/iptables.sh \
      -C iptables.sh \
      --strip-components 1
    sudo rm traffic-director-xdsv3.tar.gz
    # Generate Envoy bootstrap configuration
    cat "${BOOTSTRAP_TEMPLATE}" \
      | sed -e "s|ENVOY_NODE_ID|${ENVOY_NODE_ID}|g" \
      | sed -e "s|ENVOY_ZONE|${GCE_ZONE}|g" \
      | sed -e "s|VPC_NETWORK_NAME|${VPC_NETWORK_NAME}|g" \
      | sed -e "s|CONFIG_PROJECT_NUMBER|${GCP_PROJECT_NUMBER}|g" \
      | sed -e "s|ENVOY_PORT|${ENVOY_PORT}|g" \
      | sed -e "s|ENVOY_ADMIN_PORT|${ENVOY_ADMIN_PORT}|g" \
      | sed -e "s|XDS_SERVER_CERT|${ENVOY_XDS_SERVER_CERT}|g" \
      | sed -e "s|TRACING_ENABLED|${ENVOY_TRACING_ENABLED}|g" \
      | sed -e "s|ACCESSLOG_PATH|${ENVOY_ACCESS_LOG}|g" \
      | sed -e "s|BACKEND_INBOUND_PORTS|${BACKEND_INBOUND_PORTS}|g" \
      | sed -e "s|trafficdirector.googleapis.com|trafficdirector.${ENVOY_CONTROL_PLANE_REGION}.rep.googleapis.com|g" \
      | sudo tee "${ENVOY_CONFIG}"
    # Install Envoy binary
    wget -O envoy_key https://apt.envoyproxy.io/signing.key
    cat envoy_key | sudo gpg --dearmor > $(pwd)/envoy-keyring.gpg
    echo "deb [arch=$(dpkg --print-architecture) signed-by=$(pwd)/envoy-keyring.gpg] https://apt.envoyproxy.io bullseye main" | sudo tee /etc/apt/sources.list.d/envoy.list
    sudo apt-get update
    sudo apt-get install envoy
    # Run Envoy as systemd service
    sudo systemd-run --uid=${ENVOY_USER_UID} --gid=${ENVOY_USER_GID} \
      --working-directory=${ENVOY_USER_HOME} --unit=envoy.service \
      bash -c "/usr/bin/envoy --config-path ${ENVOY_CONFIG} | tee"
    # Configure iptables for traffic interception and redirection
    sudo ${ENVOY_USER_HOME}/iptables.sh \
      -p "${ENVOY_PORT}" \
      -u "${ENVOY_USER_UID}" \
      -g "${ENVOY_USER_GID}" \
      -m "REDIRECT" \
      -i "${INTERCEPTED_CIDRS}" \
      -x "${GCE_METADATA_SERVER}"'

    gcloud compute instances create td-vm-client \
      --zone=${ZONE} \
      --source-instance-template td-vm-template
  2. Log in to the created VM:

    gcloud compute ssh td-vm-client --zone=${ZONE}
  3. Verify connectivity to the created test services:

    curl 10.0.0.1:10000 --http0.9 -v

     You should see the text Hello from TCP service returned, and you should be able to see any text you type returned back to you by the netcat service running on the remote VM.

Cross-referencing regional mesh and regional route resources in a multi-project Shared VPC environment

There are scenarios where the service mesh configuration consists of services that are owned by different projects. For example, in Shared VPC or peered VPC deployments, each project owner can define their own set of services with the purpose of making these services available to all other projects.

This configuration is "cross-project" because multiple resources defined in different projects are combined to form a single configuration that can be served to a proxy or proxyless client.

Note: Regional resources that reference each other must be in the same region.

Regional mesh configuration in the host project

Designate a project as the host project. Any service account with permission to create, update, or delete meshes in this project can control the routing configurations attached to regional meshes in this project.

  1. Define a variable that will be used throughout the example:

    export REGION="us-central1"

     Optionally, you can replace us-central1 with a different region you want to use.

  2. Create a mesh specification and store it in the mesh.yaml file:

cat <<EOF > mesh.yaml
name: shared-mesh
EOF
  3. Define a mesh resource in this project with the required configuration:

    gcloud network-services meshes import shared-mesh \
      --source=mesh.yaml \
      --location=${REGION}

     Note the full URI of this mesh resource. Service owners will need it later to attach their routes to this mesh.

  4. Grant the networkservices.meshes.use IAM permission on this mesh to the cross-project service accounts that should be able to attach their service information to this mesh:

    gcloud projects add-iam-policy-binding HOST_PROJECT_NUMBER \
      --member='HTTP_ROUTE_SERVICE_OWNER_ACCOUNT' \
      --role='roles/compute.networkAdmin'

     Now all service owners that have the networkservices.meshes.use permission granted to them can add their routing rules to this mesh.
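     Anyone with describe access to the host project can retrieve the full mesh URI mentioned in step 3 at any time (an optional helper):

     gcloud network-services meshes describe shared-mesh \
       --location=${REGION} \
       --format='value(name)'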

Route configuration in service projects

Each service owner needs to create regional Backend Service(s) and regional Route resources in their project, similar to Set up Envoy proxies with HTTP services. The only difference is that each HTTPRoute/GRPCRoute/TCPRoute would have the URI of the host project's mesh resource in the meshes field.

  1. Create a sharedvpc-http-route:

    echo"name: sharedvpc-http-routehostnames:- helloworld-gcemeshes:- /projects/HOST_PROJECT_NUMBER/locations/${REGION}/meshes/shared-meshrules:- action:    destinations:    - serviceName: \"SERVICE_URL\""|\gcloudnetwork-serviceshttp-routesimportsharedvpc-http-route\--source=-\--location=${REGION}

Configure client services in service projects

A Cloud Service Mesh client (Envoy proxy or proxyless) that is located in a service project needs to specify the number of the project where the mesh resource is located, and the mesh name, in its bootstrap configuration:

TRAFFICDIRECTOR_GCP_PROJECT_NUMBER=HOST_PROJECT_NUMBER
TRAFFICDIRECTOR_MESH_NAME=MESH_NAME
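For a proxyless gRPC client, a hedged sketch of achieving the same with the bootstrap generator; the flags mirror the client VM example earlier in this guide, with the host project's number and the shared mesh name substituted:

# Points the bootstrap at the host project's shared mesh in this region.
./td-grpc-bootstrap --config-mesh=shared-mesh \
  --gcp-project-number=HOST_PROJECT_NUMBER \
  --xds-server-uri=trafficdirector.${REGION}.rep.googleapis.com:443 | sudo tee /run/td-grpc-bootstrap.json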

Gateway TLS routing

This section demonstrates how to set up an Envoy proxy-based ingress gatewaywith regional Gateway and regional TLSRoute resources.

A regional external passthrough Network Load Balancer directs traffic to Envoy proxies that act as an ingress gateway. The Envoy proxies use TLS passthrough routing and direct traffic to HTTPS servers running on the backend VM instances.

Define some variables that will be used throughout the example.

export PROJECT_ID="PROJECT_ID"
export PROJECT_NUMBER="PROJECT_NUMBER"
export REGION="us-central1"
export ZONE="us-central1-b"
export NETWORK_NAME="default"

Replace the following:

  • PROJECT_ID with your project ID.
  • PROJECT_NUMBER with your project number.

Optionally, you can replace the following:

  • us-central1 with a different region you want to use.
  • us-central1-b with a different zone you want to use.
  • default with a different network name you want to use.


Configure firewall rules

  1. Configure firewall rules to allow traffic from any source. Edit the commands for your ports and source IP address ranges.

    gcloud compute firewall-rules create allow-gateway-health-checks \
      --network=${NETWORK_NAME} \
      --direction=INGRESS \
      --action=ALLOW \
      --rules=tcp \
      --source-ranges="35.191.0.0/16,209.85.152.0/22,209.85.204.0/22" \
      --target-tags=gateway-proxy

Configure IAM permissions

  1. Create a service account identity for the gateway proxies:

    gcloud iam service-accounts create gateway-proxy
  2. Assign the required IAM roles to the service account identity:

    gcloud projects add-iam-policy-binding ${PROJECT_ID} \
      --member="serviceAccount:gateway-proxy@${PROJECT_ID}.iam.gserviceaccount.com" \
      --role="roles/trafficdirector.client"

    gcloud projects add-iam-policy-binding ${PROJECT_ID} \
      --member="serviceAccount:gateway-proxy@${PROJECT_ID}.iam.gserviceaccount.com" \
      --role="roles/logging.logWriter"

Configure the regional Gateway

  1. In a file called gateway8443.yaml, create the Gateway specification for HTTP traffic:

cat <<EOF > gateway8443.yaml
name: gateway8443
scope: gateway-proxy-8443
ports:
- 8443
type: OPEN_MESH
EOF
  2. Create the regional Gateway resource using the gateway8443.yaml specification:

    gcloud network-services gateways import gateway8443 \
      --source=gateway8443.yaml \
      --location=${REGION}
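     You can confirm the Gateway resource, its scope, and its ports with an optional describe:

     gcloud network-services gateways describe gateway8443 \
       --location=${REGION}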

Create a managed instance group with Envoy proxies

In this section, you create an instance template for a VM running an automatically deployed Envoy service proxy. The Envoys have the scope set to gateway-proxy. Don't pass the serving port as a parameter of the --service-proxy flag.

  1. Create an instance template for the Envoy proxies:

    gcloud beta compute instance-templates create gateway-proxy \
      --scopes=https://www.googleapis.com/auth/cloud-platform \
      --tags=gateway-proxy,http-td-tag,http-server,https-server \
      --image-family=debian-11 \
      --image-project=debian-cloud \
      --network-interface=network=${NETWORK_NAME} \
      --service-account="gateway-proxy@${PROJECT_ID}.iam.gserviceaccount.com" \
      --metadata=startup-script='#! /usr/bin/env bash
    # Set variables
    export ENVOY_CONTROL_PLANE_REGION="us-central1"
    export GCP_PROJECT_NUMBER=PROJECT_NUMBER
    export VPC_NETWORK_NAME=scope:gateway-proxy-8443
    export ENVOY_USER="envoy"
    export ENVOY_USER_UID="1337"
    export ENVOY_USER_GID="1337"
    export ENVOY_USER_HOME="/opt/envoy"
    export ENVOY_CONFIG="${ENVOY_USER_HOME}/config.yaml"
    export ENVOY_PORT="15001"
    export ENVOY_ADMIN_PORT="15000"
    export ENVOY_TRACING_ENABLED="false"
    export ENVOY_XDS_SERVER_CERT="/etc/ssl/certs/ca-certificates.crt"
    export ENVOY_ACCESS_LOG="/dev/stdout"
    export ENVOY_NODE_ID="$(cat /proc/sys/kernel/random/uuid)~$(hostname -i)"
    export BOOTSTRAP_TEMPLATE="${ENVOY_USER_HOME}/bootstrap_template.yaml"
    export GCE_METADATA_SERVER="169.254.169.254/32"
    export INTERCEPTED_CIDRS="*"
    export GCE_ZONE=$(curl -sS -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/zone | cut -d"/" -f4)
    # Create system user account for Envoy binary
    sudo groupadd ${ENVOY_USER} \
      --gid=${ENVOY_USER_GID} \
      --system
    sudo adduser ${ENVOY_USER} \
      --uid=${ENVOY_USER_UID} \
      --gid=${ENVOY_USER_GID} \
      --home=${ENVOY_USER_HOME} \
      --disabled-login \
      --system
    # Download and extract the Cloud Service Mesh tar.gz file
    cd ${ENVOY_USER_HOME}
    sudo curl -sL https://storage.googleapis.com/traffic-director/traffic-director-xdsv3.tar.gz -o traffic-director-xdsv3.tar.gz
    sudo tar -xvzf traffic-director-xdsv3.tar.gz traffic-director-xdsv3/bootstrap_template.yaml \
      -C bootstrap_template.yaml \
      --strip-components 1
    sudo tar -xvzf traffic-director-xdsv3.tar.gz traffic-director-xdsv3/iptables.sh \
      -C iptables.sh \
      --strip-components 1
    sudo rm traffic-director-xdsv3.tar.gz
    # Generate Envoy bootstrap configuration
    cat "${BOOTSTRAP_TEMPLATE}" \
      | sed -e "s|ENVOY_NODE_ID|${ENVOY_NODE_ID}|g" \
      | sed -e "s|ENVOY_ZONE|${GCE_ZONE}|g" \
      | sed -e "s|VPC_NETWORK_NAME|${VPC_NETWORK_NAME}|g" \
      | sed -e "s|CONFIG_PROJECT_NUMBER|${GCP_PROJECT_NUMBER}|g" \
      | sed -e "s|ENVOY_PORT|${ENVOY_PORT}|g" \
      | sed -e "s|ENVOY_ADMIN_PORT|${ENVOY_ADMIN_PORT}|g" \
      | sed -e "s|XDS_SERVER_CERT|${ENVOY_XDS_SERVER_CERT}|g" \
      | sed -e "s|TRACING_ENABLED|${ENVOY_TRACING_ENABLED}|g" \
      | sed -e "s|ACCESSLOG_PATH|${ENVOY_ACCESS_LOG}|g" \
      | sed -e "s|BACKEND_INBOUND_PORTS|${BACKEND_INBOUND_PORTS}|g" \
      | sed -e "s|trafficdirector.googleapis.com|trafficdirector.${ENVOY_CONTROL_PLANE_REGION}.rep.googleapis.com|g" \
      | sudo tee "${ENVOY_CONFIG}"
    # Install Envoy binary
    wget -O envoy_key https://apt.envoyproxy.io/signing.key
    cat envoy_key | sudo gpg --dearmor > $(pwd)/envoy-keyring.gpg
    echo "deb [arch=$(dpkg --print-architecture) signed-by=$(pwd)/envoy-keyring.gpg] https://apt.envoyproxy.io bullseye main" | sudo tee /etc/apt/sources.list.d/envoy.list
    sudo apt-get update
    sudo apt-get install envoy
    # Run Envoy as systemd service
    sudo systemd-run --uid=${ENVOY_USER_UID} --gid=${ENVOY_USER_GID} \
      --working-directory=${ENVOY_USER_HOME} --unit=envoy.service \
      bash -c "/usr/bin/envoy --config-path ${ENVOY_CONFIG} | tee"
    # Configure iptables for traffic interception and redirection
    sudo ${ENVOY_USER_HOME}/iptables.sh \
      -p "${ENVOY_PORT}" \
      -u "${ENVOY_USER_UID}" \
      -g "${ENVOY_USER_GID}" \
      -m "REDIRECT" \
      -i "${INTERCEPTED_CIDRS}" \
      -x "${GCE_METADATA_SERVER}"'
  2. Create a regional managed instance group from the instance template:

    gcloud compute instance-groups managed create gateway-proxy \
      --region=${REGION} \
      --size=1 \
      --template=gateway-proxy
  3. Set the serving port name for the managed instance group:

    gcloud compute instance-groups managed set-named-ports gateway-proxy \
      --named-ports=https:8443 \
      --region=${REGION}

Set up the regional external passthrough Network Load Balancer

  1. Create a static external regional IP address:

    gcloud compute addresses create xnlb-${REGION} \
      --region=${REGION}
  2. Obtain the IP address that is reserved for the external load balancer and store it in the IP_ADDRESS variable, which is used in later steps:

    export IP_ADDRESS=$(gcloud compute addresses describe xnlb-${REGION} \
      --region=${REGION} --format='value(address)')
  3. Create a health check for the gateway proxies:

    gcloud compute health-checks create tcp xnlb-${REGION} \
      --region=${REGION} \
      --use-serving-port
  4. Create a backend service for the gateway proxies:

    gcloud compute backend-services create xnlb-${REGION} \
      --health-checks=xnlb-${REGION} \
      --health-checks-region=${REGION} \
      --load-balancing-scheme=EXTERNAL \
      --protocol=TCP \
      --region=${REGION} \
      --port-name=https
  5. Add the managed instance group as a backend:

    gcloud compute backend-services add-backend xnlb-${REGION} \
      --instance-group=gateway-proxy \
      --instance-group-region=${REGION} \
      --region=${REGION}
  6. Create a forwarding rule to route traffic to the gateway proxies:

    gcloud compute forwarding-rules create xnlb-${REGION} \
      --region=${REGION} \
      --load-balancing-scheme=EXTERNAL \
      --address=${IP_ADDRESS} \
      --ip-protocol=TCP \
      --ports=8443 \
      --backend-service=xnlb-${REGION} \
      --backend-service-region=${REGION}

Configure a managed instance group running an HTTPS service

  1. Create an instance template with an HTTPS service that is exposed on port 8443:

    gcloud compute instance-templates create td-https-vm-template \
      --scopes=https://www.googleapis.com/auth/cloud-platform \
      --tags=https-td-server \
      --image-family=debian-11 \
      --image-project=debian-cloud \
      --metadata=startup-script='#! /bin/bash
    sudo rm -rf /var/lib/apt/lists/*
    sudo apt-get -y clean
    sudo apt-get -y update
    sudo apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
    sudo curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
    sudo add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
    sudo apt-get -y update
    sudo apt-get -y install docker-ce
    sudo which docker
    echo "{ \"registry-mirrors\": [\"https://mirror.gcr.io\"] }" | sudo tee -a /etc/docker/daemon.json
    sudo service docker restart
    sudo docker run -e HTTPS_PORT=9999 -p 8443:9999 --rm -dt mendhak/http-https-echo:22'
  2. Create a managed instance group based on the instance template:

    gcloud compute instance-groups managed create https-td-mig-us-${REGION} \
      --zone=${ZONE} \
      --size=2 \
      --template=td-https-vm-template
  3. Set the name of the serving port for the managed instance group:

    gcloud compute instance-groups managed set-named-ports https-td-mig-us-${REGION} \
      --named-ports=https:8443 \
      --zone=${ZONE}
  4. Create a health check:

    gcloud compute health-checks create https https-helloworld-health-check \
      --port=8443 --region=${REGION}
  5. Create a firewall rule to allow incoming health check connections toinstances in your network:

    gcloud compute firewall-rules create https-vm-allow-health-checks \
      --network ${NETWORK_NAME} --action allow --direction INGRESS \
      --source-ranges 35.191.0.0/16,130.211.0.0/22 \
      --target-tags https-td-server \
      --rules tcp:8443
  6. Create a regional backend service with a load balancing scheme of INTERNAL_SELF_MANAGED and add the health check:

    gcloud compute backend-services create https-helloworld-service \
      --region=${REGION} \
      --load-balancing-scheme=INTERNAL_SELF_MANAGED \
      --port-name=https \
      --health-checks="https://www.googleapis.com/compute/v1/projects/${PROJECT_NUMBER}/regions/${REGION}/healthChecks/https-helloworld-health-check"
  7. Add the managed instance group as a backend to the backend service:

    gcloud compute backend-services add-backend https-helloworld-service \
      --instance-group=https-td-mig-us-${REGION} \
      --instance-group-zone=${ZONE} \
      --region=${REGION}

Set up routing with a TLSRoute resource

  1. In a file called tls_route.yaml, create the TLSRoute specification:

cat <<EOF > tls_route.yaml
name: helloworld-tls-route
gateways:
- projects/${PROJECT_NUMBER}/locations/${REGION}/gateways/gateway8443
rules:
- matches:
  - sniHost:
    - example.com
    alpn:
    - h2
  action:
    destinations:
    - serviceName: projects/${PROJECT_NUMBER}/locations/${REGION}/backendServices/https-helloworld-service
EOF

     In the previous instruction, the TLSRoute matches example.com as the SNI and h2 as the ALPN. If the matches are changed as follows, the TLSRoute matches either SNI or ALPN:

    - matches:
      - sniHost:
        - example.com
      - alpn:
        - h2
  2. Use the tls_route.yaml specification to create the TLSRoute resource:

    gcloud network-services tls-routes import helloworld-tls-route \
      --source=tls_route.yaml \
      --location=${REGION}
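     Optionally, read the TLSRoute back to confirm the SNI and ALPN matches:

     gcloud network-services tls-routes describe helloworld-tls-route \
       --location=${REGION}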

Validate the deployment

  1. Run the following curl command to verify HTTP connectivity to the test services that you created:

    curl https://example.com:8443 --resolve example.com:8443:${IP_ADDRESS} -k
  2. The command returns a response from one of the VMs in the managed instance group. The output is similar to the following:

    {  "path": "/",  "headers": {    "host": "example.com:8443",    "user-agent": "curl/8.16.0",    "accept": "*/*"  },  "method": "GET",  "body": "",  "fresh": false,  "hostname": "example.com",  "ip": "::ffff:10.128.0.59",  "ips": [],  "protocol": "https",  "query": {},  "subdomains": [],  "xhr": false,  "os": {    "hostname": "19cd7812e792"  },  "connection": {    "servername": "example.com"  }

Verify with negative tests

  1. In the following command, the SNI does not match example.com, so the Gateway rejects the connection:

    curl https://invalid-server.com:8443 --resolve invalid-server.com:8443:${IP_ADDRESS} -k
  2. In the following command, the ALPN does not match h2 (the HTTP/2 protocol), so the Gateway rejects the connection:

    curl https://example.com:8443 --resolve example.com:8443:${IP_ADDRESS} -k --http1.1

     Both of the previous commands return the following error:

    curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection.
  3. In the following command, the client creates a plain-text (unencrypted) connection, so the Gateway rejects the connection with a 404 Not Found error:

    curl example.com:8443 --resolve example.com:8443:${IP_ADDRESS} -k
