Using Cloud Service Mesh egress gateways on GKE clusters: Tutorial

This tutorial shows how to use Cloud Service Mesh egress gateways and other Google Cloud controls to secure outbound traffic (egress) from workloads deployed on a Google Kubernetes Engine cluster. The tutorial is intended as a companion to the Best practices for using Cloud Service Mesh egress gateways on GKE clusters.

The intended audience for this tutorial includes network, platform, and security engineers who administer Google Kubernetes Engine clusters used by one or more software delivery teams. The controls described here are especially useful for organizations that must demonstrate compliance with regulations such as GDPR and PCI.

Objectives

  • Set up the infrastructure for running Cloud Service Mesh.
  • Install Cloud Service Mesh.
  • Install egress gateway proxies running on a dedicated node pool.
  • Configure multi-tenant routing rules for external traffic through the egress gateway:
    • Applications in namespace team-x can connect to example.com.
    • Applications in namespace team-y can connect to httpbin.org.
  • Use the Sidecar resource to restrict the scope of the sidecar proxy egress configuration for each namespace.
  • Configure authorization policies to enforce egress rules.
  • Configure the egress gateway to upgrade plain HTTP requests to TLS (TLS origination).
  • Configure the egress gateway to pass through TLS traffic.
  • Set up Kubernetes network policies as an additional egress control.
  • Configure direct access to Google APIs using Private Google Access and Identity and Access Management (IAM) permissions.

Costs

In this document, you use billable components of Google Cloud, including Google Kubernetes Engine, Compute Engine, Cloud NAT, Cloud DNS, and Cloud Storage.

To generate a cost estimate based on your projected usage, use the pricing calculator.

New Google Cloud users might be eligible for a free trial.

When you finish this tutorial, you can avoid ongoing costs by deleting the resources you created. For more information, see Clean up.

Before you begin

  1. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Roles required to select or create a project

    • Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
    • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
    Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

    Go to project selector

  2. Verify that billing is enabled for your Google Cloud project.

  3. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

  4. Create a working directory to use while following the tutorial:

    mkdir -p ~/WORKING_DIRECTORY
    cd ~/WORKING_DIRECTORY
  5. Create a shell script to initialize your environment for the tutorial. Replace and edit the variables according to your project and preferences. Run this script with the source command to reinitialize your environment if your shell session expires:

cat << 'EOF' > ./init-egress-tutorial.sh
#! /usr/bin/env bash
PROJECT_ID=YOUR_PROJECT_ID
REGION=REGION
ZONE=ZONE

gcloud config set project ${PROJECT_ID}
gcloud config set compute/region ${REGION}
gcloud config set compute/zone ${ZONE}
EOF
  6. Enable compute.googleapis.com:

    gcloud services enable compute.googleapis.com --project=YOUR_PROJECT_ID
  7. Make the script executable and run it with the source command to initialize your environment. Select Y if prompted to enable compute.googleapis.com:

    chmod +x ./init-egress-tutorial.sh
    source ./init-egress-tutorial.sh

Setting up the infrastructure

Create a VPC network and subnet

  1. Create a new VPC network:

    gcloud compute networks create vpc-network \
        --subnet-mode custom
  2. Create a subnet for the cluster to run in with pre-assigned secondary IP address ranges for Pods and services. Private Google Access is enabled so that applications with only internal IP addresses can reach Google APIs and services:

    gcloud compute networks subnets create subnet-gke \
        --network vpc-network \
        --range 10.0.0.0/24 \
        --secondary-range pods=10.1.0.0/16,services=10.2.0.0/20 \
        --enable-private-ip-google-access

Configure Cloud NAT

Cloud NAT allows workloads without external IP addresses to connect to destinations on the internet and receive inbound responses from those destinations.

Note: Cloud Router and Cloud NAT are used only to configure NAT for external internet connectivity; they are not in the path of network traffic. NAT configuration is applied at the software-defined networking layer.
  1. Create a Cloud Router:

    gcloud compute routers create nat-router \
        --network vpc-network
  2. Add a NAT configuration to the router:

    gcloud compute routers nats create nat-config \
        --router nat-router \
        --nat-all-subnet-ip-ranges \
        --auto-allocate-nat-external-ips

Create service accounts for each GKE node pool

Create two service accounts for use by the two GKE node pools. A separate service account is assigned to each node pool so that you can apply VPC firewall rules to specific nodes.

  1. Create a service account for use by the nodes in the default node pool:

    gcloud iam service-accounts create sa-application-nodes \
        --description="SA for application nodes" \
        --display-name="sa-application-nodes"
  2. Create a service account for use by the nodes in the gateway node pool:

    gcloud iam service-accounts create sa-gateway-nodes \
        --description="SA for gateway nodes" \
        --display-name="sa-gateway-nodes"

Grant permissions to the service accounts

Add a minimal set of IAM roles to the application and gateway service accounts. These roles are required for logging, monitoring, and pulling private container images from Container Registry.

project_roles=(
    roles/logging.logWriter
    roles/monitoring.metricWriter
    roles/monitoring.viewer
    roles/storage.objectViewer
)
for role in "${project_roles[@]}"
do
    gcloud projects add-iam-policy-binding ${PROJECT_ID} \
        --member="serviceAccount:sa-application-nodes@${PROJECT_ID}.iam.gserviceaccount.com" \
        --role="$role"
    gcloud projects add-iam-policy-binding ${PROJECT_ID} \
        --member="serviceAccount:sa-gateway-nodes@${PROJECT_ID}.iam.gserviceaccount.com" \
        --role="$role"
done
Note: If workloads on your GKE cluster need access to other Google Cloud services, create workload-specific service accounts with the least privileges possible. Workload Identity Federation for GKE allows IAM permissions to be assigned to specific workloads. Later, this tutorial explains how to configure IAM access so that a particular workload can read a file in a Cloud Storage bucket.

Creating the firewall rules

In the following steps, you apply a firewall rule to the VPC network so that, by default, all egress traffic is denied. Specific connectivity is required for the cluster to function and for gateway nodes to be able to reach destinations outside of the VPC. A minimal set of specific firewall rules overrides the default deny-all rule to allow the necessary connectivity.

  1. Create a default (low priority) firewall rule to deny all egress from the VPC network:

    gcloud compute firewall-rules create global-deny-egress-all \
        --action DENY \
        --direction EGRESS \
        --rules all \
        --destination-ranges 0.0.0.0/0 \
        --network vpc-network \
        --priority 65535 \
        --description "Default rule to deny all egress from the network."
  2. Create a rule to allow only those nodes with the gateway service account to reach the internet:

    gcloud compute firewall-rules create gateway-allow-egress-web \
        --action ALLOW \
        --direction EGRESS \
        --rules tcp:80,tcp:443 \
        --target-service-accounts sa-gateway-nodes@${PROJECT_ID}.iam.gserviceaccount.com \
        --network vpc-network \
        --priority 1000 \
        --description "Allow the nodes running the egress gateways to connect to the web"
  3. Allow nodes to reach the Kubernetes control plane:

    gcloud compute firewall-rules create allow-egress-to-api-server \
        --action ALLOW \
        --direction EGRESS \
        --rules tcp:443,tcp:10250 \
        --target-service-accounts sa-application-nodes@${PROJECT_ID}.iam.gserviceaccount.com,sa-gateway-nodes@${PROJECT_ID}.iam.gserviceaccount.com \
        --destination-ranges 10.5.0.0/28 \
        --network vpc-network \
        --priority 1000 \
        --description "Allow nodes to reach the Kubernetes API server."
  4. Optional: This firewall rule is not needed if you use Managed Cloud Service Mesh.

    Cloud Service Mesh uses webhooks when injecting sidecar proxies into workloads. Allow the GKE API server to call the webhooks exposed by the service mesh control plane running on the nodes:

    gcloud compute firewall-rules create allow-ingress-api-server-to-webhook \
        --action ALLOW \
        --direction INGRESS \
        --rules tcp:15017 \
        --target-service-accounts sa-application-nodes@${PROJECT_ID}.iam.gserviceaccount.com,sa-gateway-nodes@${PROJECT_ID}.iam.gserviceaccount.com \
        --source-ranges 10.5.0.0/28 \
        --network vpc-network \
        --priority 1000 \
        --description "Allow the API server to call the webhooks exposed by istiod discovery"
  5. Allow egress connectivity between nodes and Pods running on the cluster. GKE automatically creates a corresponding ingress rule. No rule is required for Service connectivity because the iptables routing chain always converts Service IP addresses to Pod IP addresses.

    gcloud compute firewall-rules create allow-egress-nodes-and-pods \
        --action ALLOW \
        --direction EGRESS \
        --rules all \
        --target-service-accounts sa-application-nodes@${PROJECT_ID}.iam.gserviceaccount.com,sa-gateway-nodes@${PROJECT_ID}.iam.gserviceaccount.com \
        --destination-ranges 10.0.0.0/24,10.1.0.0/16 \
        --network vpc-network \
        --priority 1000 \
        --description "Allow egress to other Nodes and Pods"
  6. Allow access to the reserved sets of IP addresses used by Private Google Access for serving Google APIs, Container Registry, and other services:

    gcloud compute firewall-rules create allow-egress-gcp-apis \
        --action ALLOW \
        --direction EGRESS \
        --rules tcp \
        --target-service-accounts sa-application-nodes@${PROJECT_ID}.iam.gserviceaccount.com,sa-gateway-nodes@${PROJECT_ID}.iam.gserviceaccount.com \
        --destination-ranges 199.36.153.8/30 \
        --network vpc-network \
        --priority 1000 \
        --description "Allow access to the VIPs used by Google Cloud APIs (Private Google Access)"

    If you are using VPC Service Controls, use the 199.36.153.4/30 range (the restricted.googleapis.com VIP) instead. For more information, see the following section.

    As an alternative to using the reserved set of internal IP addresses, you can expose a Private Service Connect endpoint with an internal IP address of your choice.

  7. Allow the Google Cloud health checker service to access Pods running in the cluster. See health checks for more information.

    gcloud compute firewall-rules create allow-ingress-gcp-health-checker \
        --action ALLOW \
        --direction INGRESS \
        --rules tcp:80,tcp:443 \
        --target-service-accounts sa-application-nodes@${PROJECT_ID}.iam.gserviceaccount.com,sa-gateway-nodes@${PROJECT_ID}.iam.gserviceaccount.com \
        --source-ranges 35.191.0.0/16,130.211.0.0/22,209.85.152.0/22,209.85.204.0/22 \
        --network vpc-network \
        --priority 1000 \
        --description "Allow workloads to respond to Google Cloud health checks"

Configuring private access to Google Cloud APIs

Private Google Access enables VMs and Pods that only have internal IP addresses to access Google APIs and services. Although Google APIs and services are served from external IPs, traffic from the nodes never leaves the Google network when using Private Google Access.

Private Google Access provides different options and VIPs for connecting to Google APIs and services. This tutorial uses private.googleapis.com and its corresponding VIP. If you use VPC Service Controls and want to block access to APIs that do not support VPC Service Controls, use restricted.googleapis.com and the 199.36.153.4/30 VIP.

As an alternative to using the reserved set of internal IP addresses, you can expose APIs using Private Service Connect endpoints with IP addresses of your choice.
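For example, a Private Service Connect endpoint for Google APIs could be created with commands similar to the following sketch. The endpoint name pscgoogleapis and the address 10.3.0.5 are illustrative placeholders, not part of this tutorial, and the address must not overlap the subnet ranges created earlier:

gcloud compute addresses create psc-google-apis \
    --global \
    --purpose PRIVATE_SERVICE_CONNECT \
    --network vpc-network \
    --addresses 10.3.0.5

gcloud compute forwarding-rules create pscgoogleapis \
    --global \
    --network vpc-network \
    --address psc-google-apis \
    --target-google-apis-bundle all-apis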

Enable the Cloud DNS API:

gcloud services enable dns.googleapis.com

Create a private DNS zone, a CNAME record, and A records so that nodes and workloads can connect to Google APIs and services using Private Google Access and the private.googleapis.com hostname:

gcloud dns managed-zones create private-google-apis \
    --description "Private DNS zone for Google APIs" \
    --dns-name googleapis.com \
    --visibility private \
    --networks vpc-network

gcloud dns record-sets transaction start --zone private-google-apis

gcloud dns record-sets transaction add private.googleapis.com. \
    --name "*.googleapis.com" \
    --ttl 300 \
    --type CNAME \
    --zone private-google-apis

gcloud dns record-sets transaction add "199.36.153.8" \
    "199.36.153.9" "199.36.153.10" "199.36.153.11" \
    --name private.googleapis.com \
    --ttl 300 \
    --type A \
    --zone private-google-apis

gcloud dns record-sets transaction execute --zone private-google-apis

Configuring private access to Container Registry

Create a private DNS zone, a CNAME record, and an A record so that nodes can connect to Container Registry using Private Google Access and the gcr.io hostname:

gcloud dns managed-zones create private-gcr-io \
    --description "private zone for Container Registry" \
    --dns-name gcr.io \
    --visibility private \
    --networks vpc-network

gcloud dns record-sets transaction start --zone private-gcr-io

gcloud dns record-sets transaction add gcr.io. \
    --name "*.gcr.io" \
    --ttl 300 \
    --type CNAME \
    --zone private-gcr-io

gcloud dns record-sets transaction add "199.36.153.8" "199.36.153.9" "199.36.153.10" "199.36.153.11" \
    --name gcr.io \
    --ttl 300 \
    --type A \
    --zone private-gcr-io

gcloud dns record-sets transaction execute --zone private-gcr-io

Create a private GKE cluster

  1. Find the external IP address of your Cloud Shell so that you can add it to the list of networks that are allowed to access your cluster's API server:

    SHELL_IP=$(dig TXT -4 +short @ns1.google.com o-o.myaddr.l.google.com)

    After a period of inactivity, the external IP address of your Cloud Shell VM can change. If that happens, you must update your cluster's list of authorized networks. Add the following command to your initialization script:

cat << 'EOF' >> ./init-egress-tutorial.sh
SHELL_IP=$(dig TXT -4 +short @ns1.google.com o-o.myaddr.l.google.com)
gcloud container clusters update cluster1 \
    --enable-master-authorized-networks \
    --master-authorized-networks ${SHELL_IP//\"}/32
EOF
  2. Enable the Google Kubernetes Engine API:

    gcloud services enable container.googleapis.com
  3. Create a private GKE cluster:

    gcloud container clusters create cluster1 \
        --enable-ip-alias \
        --enable-private-nodes \
        --release-channel "regular" \
        --enable-master-authorized-networks \
        --master-authorized-networks ${SHELL_IP//\"}/32 \
        --master-ipv4-cidr 10.5.0.0/28 \
        --enable-dataplane-v2 \
        --service-account "sa-application-nodes@${PROJECT_ID}.iam.gserviceaccount.com" \
        --machine-type "e2-standard-4" \
        --network "vpc-network" \
        --subnetwork "subnet-gke" \
        --cluster-secondary-range-name "pods" \
        --services-secondary-range-name "services" \
        --workload-pool "${PROJECT_ID}.svc.id.goog" \
        --zone ${ZONE}

    It takes a few minutes for the cluster to be created. The cluster has private nodes with internal IP addresses. Pods and services are assigned IPs from the named secondary ranges that you defined when creating the VPC subnet.

    Cloud Service Mesh with an in-cluster control plane requires the cluster nodes to use a machine type that has at least 4 vCPUs.

    Google recommends that the cluster be subscribed to the "regular" release channel to ensure that nodes are running a Kubernetes version that is supported by Cloud Service Mesh.

    For more information on the prerequisites for running Cloud Service Mesh with an in-cluster control plane, see the in-cluster prerequisites.

    For more information on the requirements and limitations for running managed Cloud Service Mesh, see the managed Cloud Service Mesh supported features.

    Workload Identity Federation for GKE is enabled on the cluster. Cloud Service Mesh requires Workload Identity Federation for GKE, which is also the recommended way to access Google APIs from GKE workloads.

  4. Create a node pool called gateway. This node pool is where the egress gateway is deployed. The dedicated=gateway:NoSchedule taint is added to every node in the gateway node pool.

    gcloud container node-pools create "gateway" \
        --cluster "cluster1" \
        --machine-type "e2-standard-4" \
        --node-taints dedicated=gateway:NoSchedule \
        --service-account "sa-gateway-nodes@${PROJECT_ID}.iam.gserviceaccount.com" \
        --num-nodes "1"

    Kubernetes taints and tolerations help ensure that only egress gateway Pods run on nodes in the gateway node pool.

  5. Download credentials so that you can connect to the cluster with kubectl:

    gcloud container clusters get-credentials cluster1
  6. Verify that the gateway nodes have the correct taint:

    kubectl get nodes -l cloud.google.com/gke-nodepool=gateway -o yaml \
        -o=custom-columns='name:metadata.name,taints:spec.taints[?(@.key=="dedicated")]'

    The output is similar to the following:

    name                                 taints
    gke-cluster1-gateway-9d65b410-cffs   map[effect:NoSchedule key:dedicated value:gateway]

Installing and setting up Cloud Service Mesh

Follow one of the installation guides for Cloud Service Mesh.

Once you have installed Cloud Service Mesh, stop and return to this tutorial without installing ingress or egress gateways.

Install an egress gateway

  1. Create a Kubernetes namespace for the egress gateway:

    kubectl create namespace istio-egress
  2. Enable the namespace for injection. The steps depend on your control plane implementation.

    Managed (TD)

    Apply the default injection label to the namespace:

    kubectl label namespace istio-egress \
        istio.io/rev- istio-injection=enabled --overwrite

    Managed (Istiod)

    Recommended: Run the following command to apply the default injection label to the namespace:

    kubectl label namespace istio-egress \
        istio.io/rev- istio-injection=enabled --overwrite

    If you are an existing user with the Managed Istiod control plane: We recommend that you use default injection, but revision-based injection is supported. Use the following instructions:

    1. Run the following command to locate the available release channels:

      kubectl -n istio-system get controlplanerevision

      The output is similar to the following:

      NAME                AGE
      asm-managed-rapid   6d7h
      Note: If two control plane revisions appear in the list above, remove one. Having multiple control plane channels in the cluster is not supported.

      In the output, the value under the NAME column is the revision label that corresponds to the available release channel for the Cloud Service Mesh version.

    2. Apply the revision label to the namespace:

      kubectl label namespace istio-egress \
          istio-injection- istio.io/rev=REVISION_LABEL --overwrite

    In-cluster

    Recommended: Run the following command to apply the default injection label to the namespace:

    kubectl label namespace istio-egress \
        istio.io/rev- istio-injection=enabled --overwrite

    We recommend that you use default injection, but revision-based injection is supported. Use the following instructions:

    1. Use the following command to locate the revision label on istiod:

      kubectl get deploy -n istio-system -l app=istiod -o \
          jsonpath={.items[*].metadata.labels.'istio\.io\/rev'}'{"\n"}'
    2. Apply the revision label to the namespace. In the following command, REVISION_LABEL is the value of the istiod revision label that you noted in the previous step.

      kubectl label namespace istio-egress \
          istio-injection- istio.io/rev=REVISION_LABEL --overwrite
  3. Create an operator manifest for the egress gateway:

cat << EOF > egressgateway-operator.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: egressgateway-operator
  annotations:
    config.kubernetes.io/local-config: "true"
spec:
  profile: empty
  revision: REVISION
  components:
    egressGateways:
    - name: istio-egressgateway
      namespace: istio-egress
      enabled: true
  values:
    gateways:
      istio-egressgateway:
        injectionTemplate: gateway
        tolerations:
          - key: "dedicated"
            operator: "Equal"
            value: "gateway"
        nodeSelector:
          cloud.google.com/gke-nodepool: "gateway"
EOF
  4. Download the istioctl tool. You must use version 1.16.2-asm.2 or later even if you are using Cloud Service Mesh version 1.15 or lower. See Downloading the correct istioctl version.

  5. After extracting the downloaded archive, set an environment variable to hold the path to the istioctl tool and add it to your initialization script:

    ISTIOCTL=$(find "$(pwd -P)" -name istioctl)
    echo "ISTIOCTL=\"${ISTIOCTL}\"" >> ./init-egress-tutorial.sh
  6. Create the egress gateway installation manifest using the operator manifest and istioctl:

    ${ISTIOCTL} manifest generate \
        --filename egressgateway-operator.yaml \
        --output egressgateway \
        --cluster-specific

    You can view the generated manifest at 'egressgateway/Base/Pilot/EgressGateways/EgressGateways.yaml'.

    When deployed, it creates standard Kubernetes resources such as Deployment, Service, ServiceAccount, Role, RoleBinding, HorizontalPodAutoscaler, and PodDisruptionBudget. Using istioctl and the operator manifest is a convenient way to generate the deployment manifest. For your own production mesh, you can generate and manage the deployment manifest using your preferred tools.

  7. Install the egress gateway:

    kubectl apply --recursive --filename egressgateway/
  8. Check that the egress gateway is running on nodes in the gateway node pool:

    kubectl get pods -n istio-egress -o wide
  9. The egress gateway pods have affinity for nodes in the gateway node pool and a toleration that lets them run on the tainted gateway nodes. Examine the node affinity and tolerations for the egress gateway pods:

    kubectl -n istio-egress get pod -l istio=egressgateway \
        -o=custom-columns='name:metadata.name,node-affinity:spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms,tolerations:spec.tolerations[?(@.key=="dedicated")]'

    The output is similar to the following:

    name                                   node-affinity                                                                                   tolerations
    istio-egressgateway-754d9684d5-jjkdz   [map[matchExpressions:[map[key:cloud.google.com/gke-nodepool operator:In values:[gateway]]]]]   map[key:dedicated operator:Equal value:gateway]
Caution

The operator manifest for the egress gateway specifies a toleration and a nodeSelector so that the deployed gateway will only run on gateway nodes.

Make sure that only network administrators can apply the gateway toleration to Pods. Unauthorized use of the gateway toleration allows deployed pods to impersonate the gateway and run on gateway nodes. Pods running on gateway nodes can connect directly to external hosts. Restrict use of the gateway toleration as part of deployment pipelines or by using Kubernetes admission control. A GKE Enterprise Policy Controller constraint can be used in both scenarios.
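As an illustration only, a Policy Controller (Gatekeeper) constraint along the following lines could reject Pods outside the istio-egress namespace that tolerate the dedicated=gateway taint. The template name and Rego rule are a sketch, not part of this tutorial, and would need testing before use:

apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: restrictgatewaytoleration
spec:
  crd:
    spec:
      names:
        kind: RestrictGatewayToleration
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package restrictgatewaytoleration

      violation[{"msg": msg}] {
        # Pods outside the egress gateway namespace must not tolerate the gateway taint.
        input.review.namespace != "istio-egress"
        toleration := input.review.object.spec.tolerations[_]
        toleration.key == "dedicated"
        toleration.value == "gateway"
        msg := sprintf("namespace %v may not use the gateway toleration", [input.review.namespace])
      }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: RestrictGatewayToleration
metadata:
  name: restrict-gateway-toleration
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]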

Enable Envoy access logging

The steps required to enable Envoy access logs depend on your Cloud Service Mesh type, either managed or in-cluster:

Managed

Follow the instructions to enable access logs in managed Cloud Service Mesh.

In-cluster

Follow the instructions to enable access logs in in-cluster Cloud Service Mesh.

Preparing the mesh and a test application

  1. Make sure that STRICT mutual TLS is enabled. Apply a default PeerAuthentication policy for the mesh in the istio-system namespace:

cat << EOF | kubectl apply -f -
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT
EOF

    You can override this configuration by creating PeerAuthentication resources in specific namespaces.
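    For example, a namespace-scoped policy like the following sketch (shown for illustration only and not applied in this tutorial) would relax the mode to PERMISSIVE for workloads in the team-x namespace:

    apiVersion: "security.istio.io/v1beta1"
    kind: "PeerAuthentication"
    metadata:
      name: "default"
      namespace: "team-x"
    spec:
      mtls:
        mode: PERMISSIVE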

  2. Create namespaces to use for deploying test workloads. Later steps in this tutorial explain how to configure different egress routing rules for each namespace.

    kubectl create namespace team-x
    kubectl create namespace team-y
  3. Label the namespaces so that they can be selected by Kubernetes network policies:

    kubectl label namespace team-x team=x
    kubectl label namespace team-y team=y
  4. For Cloud Service Mesh to automatically inject proxy sidecars, set the injection label on the workload namespaces:

    kubectl label ns team-x istio.io/rev- istio-injection=enabled --overwrite
    kubectl label ns team-y istio.io/rev- istio-injection=enabled --overwrite
  5. Create a YAML file to use for making test deployments:

cat << 'EOF' > ./test.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test
---
apiVersion: v1
kind: Service
metadata:
  name: test
  labels:
    app: test
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      serviceAccountName: test
      containers:
      - name: test
        image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
        command: ["/bin/sleep", "infinity"]
        imagePullPolicy: IfNotPresent
EOF
  6. Deploy the test application to the team-x namespace:

    kubectl -n team-x create -f ./test.yaml
  7. Verify that the test application is deployed to a node in the default pool and that a proxy sidecar container is injected. Repeat the following command until the pod's status is Running:

    kubectl -n team-x get po -l app=test -o wide

    The output is similar to the following:

    NAME                   READY   STATUS    RESTARTS   AGE   IP          NODE                                      NOMINATED NODE   READINESS GATES
    test-d5bdf6f4f-9nxfv   2/2     Running   0          19h   10.1.1.25   gke-cluster1-default-pool-f6c7a51f-wbzj

    2 out of 2 containers are Running. One container is the test application and the other is the proxy sidecar.

    The Pod is running on a node in the default node pool.

  8. Verify that it is not possible to make an HTTP request from the test container to an external site:

    kubectl -n team-x exec -it \
        $(kubectl -n team-x get pod -l app=test -o jsonpath={.items..metadata.name}) \
        -c test -- curl -v http://example.com

    An error message from the sidecar proxy is generated because the global-deny-egress-all firewall rule denies the upstream connection.

Using the Sidecar resource to restrict the scope of sidecar proxy configuration

You can use the Sidecar resource to restrict the scope of the egress listener that is configured for sidecar proxies. To reduce configuration bloat and memory usage, it's a good practice to apply a default Sidecar resource for every namespace.

The proxy that Cloud Service Mesh runs in the sidecar is Envoy. In Envoy terminology, a cluster is a logically similar group of upstream endpoints used as destinations for load balancing.

  1. Inspect the outbound clusters configured in the Envoy sidecar proxy for the test pod by running the istioctl proxy-config command:

    ${ISTIOCTL} pc c $(kubectl -n team-x get pod -l app=test \
        -o jsonpath={.items..metadata.name}).team-x --direction outbound

    There are approximately 11 Envoy clusters in the list, including some for the egress gateway.

  2. Restrict the proxy configuration to egress routes that have been explicitly defined with service entries in the egress and team-x namespaces. Apply a Sidecar resource to the team-x namespace:

cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: team-x
spec:
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY
  egress:
  - hosts:
    - 'istio-egress/*'
    - 'team-x/*'
EOF

    Setting the outbound traffic policy mode to REGISTRY_ONLY restricts the proxy configuration to include only those external hosts that have been explicitly added to the mesh's service registry by defining service entries.

    Setting egress.hosts specifies that the sidecar proxy only selects routes from the egress namespace that are made available by using the exportTo attribute. The 'team-x/*' part includes any routes that have been configured locally in the team-x namespace.

  3. View the outbound clusters configured in the Envoy sidecar proxy, and compare them to the list of clusters that were configured before applying the Sidecar resource:

    ${ISTIOCTL} pc c $(kubectl -n team-x get pod -l app=test \
        -o jsonpath={.items..metadata.name}).team-x --direction outbound

    You see clusters for the egress gateway and one for the test pod itself.

Configuring Cloud Service Mesh to route traffic through the egress gateway

  1. Configure a Gateway for HTTP traffic on port 80. The Gateway selects the egress gateway proxy that you deployed to the egress namespace. The Gateway configuration is applied to the egress namespace and handles traffic for any host.

cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: egress-gateway
  namespace: istio-egress
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: https
      protocol: HTTPS
    hosts:
      - '*'
    tls:
      mode: ISTIO_MUTUAL
EOF
  2. Create a DestinationRule for the egress gateway with mutual TLS for authentication and encryption. Use a single shared destination rule for all external hosts.

cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: target-egress-gateway
  namespace: istio-egress
spec:
  host: istio-egressgateway.istio-egress.svc.cluster.local
  subsets:
  - name: target-egress-gateway-mTLS
    trafficPolicy:
      tls:
        mode: ISTIO_MUTUAL
EOF
  3. Create a ServiceEntry in the egress namespace to explicitly register example.com in the mesh's service registry for the team-x namespace:

cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: example-com-ext
  namespace: istio-egress
  labels:
    # Show this service and its telemetry in the Cloud Service Mesh page of the Google Cloud console
    service.istio.io/canonical-name: example.com
spec:
  hosts:
  - example.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
  exportTo:
  - 'team-x'
  - 'istio-egress'
EOF
    Configuring different egress routing for each namespace

    The exportTo property controls which namespaces can use the service entry. Network administrators use service entries to centrally control the set of external hosts available to each namespace. Configure Kubernetes RBAC permissions so that only network administrators can directly create and modify service entries. Consider creating automation with an approval workflow so that application developers can request access to new external hosts.

  4. Create a VirtualService to route traffic to example.com through the egress gateway. There are two match conditions: the first condition directs traffic to the egress gateway, and the second directs traffic from the egress gateway to the destination host. The exportTo property controls which namespaces can use the virtual service.

cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: example-com-through-egress-gateway
  namespace: istio-egress
spec:
  hosts:
  - example.com
  gateways:
  - istio-egress/egress-gateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-egress.svc.cluster.local
        subset: target-egress-gateway-mTLS
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - istio-egress/egress-gateway
      port: 80
    route:
    - destination:
        host: example.com
        port:
          number: 80
      weight: 100
  exportTo:
  - 'istio-egress'
  - 'team-x'
EOF
  5. Run istioctl analyze to check for configuration errors:

    ${ISTIOCTL} analyze -n istio-egress --revision REVISION

    The output is similar to the following:

    ✔ No validation issues found when analyzing namespace: istio-egress.
  6. Send several requests through the egress gateway to the external site:

    for i in {1..4}
    do
        kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
            -o jsonpath={.items..metadata.name}) -c test -- \
            curl -s -o /dev/null -w "%{http_code}\n" http://example.com
    done

    You see 200 status codes for all four responses.

  7. Verify that the requests were directed through the egress gateway by checking the proxy access logs. First check the access log for the proxy sidecar deployed with the test application:

    kubectl -n team-x logs -f $(kubectl -n team-x get pod -l app=test \
        -o jsonpath={.items..metadata.name}) istio-proxy

    For each request you send, you see a log entry similar to the following:

    [2020-09-14T17:37:08.045Z] "HEAD / HTTP/1.1" 200 - "-" "-" 0 0 5 4 "-" "curl/7.67.0" "d57ea5ad-90e9-46d9-8b55-8e6e404a8f9b" "example.com" "10.1.4.12:8080" outbound|80||istio-egressgateway.istio-egress.svc.cluster.local 10.1.0.17:42140 93.184.216.34:80 10.1.0.17:60326 - -
  8. Also check the egress gateway access log:

    kubectl -n istio-egress logs -f $(kubectl -n istio-egress get pod -l istio=egressgateway \
        -o jsonpath="{.items[0].metadata.name}") istio-proxy

    For each request you send, you see an egress gateway access log entry similar to the following:

    [2020-09-14T17:37:08.045Z] "HEAD / HTTP/2" 200 - "-" "-" 0 0 4 3 "10.1.0.17" "curl/7.67.0" "095711e6-64ef-4de0-983e-59158e3c55e7" "example.com" "93.184.216.34:80" outbound|80||example.com 10.1.4.12:37636 10.1.4.12:8080 10.1.0.17:44404 outbound_.80_.target-egress-gateway-mTLS_.istio-egressgateway.istio-egress.svc.cluster.local -

Configure different routing for a second namespace

Configure routing for a second external host to learn how different external connectivity can be configured for different teams.

  1. Create a Sidecar resource for the team-y namespace:

cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: team-y
spec:
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY
  egress:
  - hosts:
    - 'istio-egress/*'
    - 'team-y/*'
EOF
  2. Deploy the test application to the team-y namespace:

    kubectl -n team-y create -f ./test.yaml
  3. Register a second external host and export it to the team-x and the team-y namespaces:

cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: httpbin-org-ext
  namespace: istio-egress
  labels:
    # Show this service and its telemetry in the Cloud Service Mesh page of the Google Cloud console
    service.istio.io/canonical-name: httpbin.org
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
  exportTo:
  - 'istio-egress'
  - 'team-x'
  - 'team-y'
EOF
  4. Create a virtual service to route traffic to httpbin.org through the egress gateway:

cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: httpbin-org-through-egress-gateway
  namespace: istio-egress
spec:
  hosts:
  - httpbin.org
  gateways:
  - istio-egress/egress-gateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-egress.svc.cluster.local
        subset: target-egress-gateway-mTLS
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - istio-egress/egress-gateway
      port: 80
    route:
    - destination:
        host: httpbin.org
        port:
          number: 80
      weight: 100
  exportTo:
  - 'istio-egress'
  - 'team-x'
  - 'team-y'
EOF
  5. Run istioctl analyze to check for configuration errors:

    ${ISTIOCTL} analyze -n istio-egress --revision REVISION

    You see:

    ✔ No validation issues found when analyzing namespace: istio-egress.
  6. Make a request to httpbin.org from the team-y test app:

    kubectl -n team-y exec -it $(kubectl -n team-y get pod -l app=test -o \
        jsonpath={.items..metadata.name}) -c test -- curl -I http://httpbin.org

    You see a 200 OK response.

  7. Also make a request to httpbin.org from the team-x test app:

    kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
        -o jsonpath={.items..metadata.name}) -c test -- curl -I http://httpbin.org

    You see a 200 OK response.

  8. Attempt to make a request to example.com from the team-y namespace:

    kubectl -n team-y exec -it $(kubectl -n team-y get pod -l app=test \
        -o jsonpath={.items..metadata.name}) -c test -- curl -I http://example.com

    The request fails because there is no outbound route configured for the example.com host.

Using Authorization Policy to provide additional control over traffic

In this tutorial, authorization policies for the egress gateway are created in the istio-egress namespace. You can configure Kubernetes RBAC so that only network administrators can access the istio-egress namespace.
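For example, a Role and RoleBinding similar to the following sketch would give an example network-admins@example.com group (a placeholder, not created in this tutorial) control over Istio networking and security resources in the istio-egress namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: network-admin
  namespace: istio-egress
rules:
- apiGroups: ["networking.istio.io", "security.istio.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: network-admins-istio-egress
  namespace: istio-egress
subjects:
- kind: Group
  name: network-admins@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: network-admin
  apiGroup: rbac.authorization.k8s.io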

  1. Create an AuthorizationPolicy so that applications in the team-x namespace can connect to example.com but not to other external hosts when sending requests using port 80. The corresponding targetPort on the egress gateway pods is 8080.

cat << EOF | kubectl apply -f -
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: egress-team-x-to-example-com
  namespace: istio-egress
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces:
        - 'team-x'
    to:
    - operation:
        hosts:
        - 'example.com'
    when:
    - key: destination.port
      values: ["8080"]
EOF
  2. Verify that you can make a request to example.com from the test application in the team-x namespace:

    kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
        -o jsonpath={.items..metadata.name}) -c test -- curl -I http://example.com

    You see a 200 OK response.

  3. Try to make a request to httpbin.org from the test application in the team-x namespace:

    kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
        -o jsonpath={.items..metadata.name}) -c test -- curl -s -w " %{http_code}\n" \
        http://httpbin.org

    The request fails with an RBAC: access denied message and a 403 Forbidden status code. You might need to wait a few seconds because there is often a short delay before the authorization policy takes effect.

  4. Authorization policies provide rich control over which traffic is allowed or denied. Apply the following authorization policy to allow the test app in the team-y namespace to make requests to httpbin.org by using one particular URL path when sending requests using port 80. The corresponding targetPort on the egress gateway pods is 8080.

cat << EOF | kubectl apply -f -
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: egress-team-y-to-httpbin-teapot
  namespace: istio-egress
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces:
        - 'team-y'
    to:
    - operation:
        hosts:
        - httpbin.org
        paths: ['/status/418']
    when:
    - key: destination.port
      values: ["8080"]
EOF
  5. Attempt to connect to httpbin.org from the test app in the team-y namespace:

    kubectl -n team-y exec -it $(kubectl -n team-y get pod -l app=test \
        -o jsonpath={.items..metadata.name}) -c test -- curl -s -w " %{http_code}\n" \
        http://httpbin.org

    The request fails with an RBAC: access denied message and a 403 Forbidden status code.

  6. Now make a request to httpbin.org/status/418 from the same app:

    kubectl -n team-y exec -it $(kubectl -n team-y get pod -l app=test \
        -o jsonpath={.items..metadata.name}) -c test -- curl http://httpbin.org/status/418

    The request succeeds because the path matches the pattern in the authorization policy. The output is similar to the following:

        -=[ teapot ]=-

           _...._
         .'  _ _ `.
        | ."` ^ `". _,
        \_;`"---"`|//
          |       ;/
          \_     _/
            `"""`

TLS origination at the egress gateway

You can configure egress gateways to upgrade (originate) plain HTTP requests to TLS or mutual TLS. Allowing applications to make plain HTTP requests has several advantages when used with Istio mutual TLS and TLS origination. For more information, see the best practices guide.


  1. Create a DestinationRule. The DestinationRule specifies that the gateway originates a TLS connection to example.com.

cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: example-com-originate-tls
  namespace: istio-egress
spec:
  host: example.com
  subsets:
  - name: example-com-originate-TLS
    trafficPolicy:
      portLevelSettings:
      - port:
          number: 443
        tls:
          mode: SIMPLE
          sni: example.com
EOF
  2. Update the virtual service for example.com so that requests to port 80 on the gateway are upgraded to TLS on port 443 when they are sent to the destination host:

cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example-com-through-egress-gateway
  namespace: istio-egress
spec:
  hosts:
  - example.com
  gateways:
  - mesh
  - istio-egress/egress-gateway
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-egress.svc.cluster.local
        subset: target-egress-gateway-mTLS
        port:
          number: 80
  - match:
    - gateways:
      - istio-egress/egress-gateway
      port: 80
    route:
    - destination:
        host: example.com
        port:
          number: 443
        subset: example-com-originate-TLS
      weight: 100
EOF
  3. Make several requests to example.com from the test app in the team-x namespace:

    for i in {1..4}
    do
        kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
            -o jsonpath={.items..metadata.name}) -c test -- curl -I http://example.com
    done

    As before, the requests succeed with 200 OK responses.

  4. Check the egress gateway log to verify that the gateway routed the requests to the destination host by originating TLS connections:

    kubectl -n istio-egress logs -f $(kubectl -n istio-egress get pod -l istio=egressgateway \
        -o jsonpath="{.items[0].metadata.name}") istio-proxy

    The output is similar to the following:

    [2020-09-24T17:58:02.548Z] "HEAD / HTTP/2" 200 - "-" "-" 0 0 6 5 "10.1.1.15" "curl/7.67.0" "83a77acb-d994-424d-83da-dd8eac902dc8" "example.com" "93.184.216.34:443" outbound|443|example-com-originate-TLS|example.com 10.1.4.31:49866 10.1.4.31:8080 10.1.1.15:37334 outbound_.80_.target-egress-gateway-mTLS_.istio-egressgateway.istio-egress.svc.cluster.local -

    The proxy sidecar sent the request to the gateway using port 80, and TLS originated on port 443 to send the request to the destination host.

Note: You can configure additional client settings for TLS and mutual TLS connections that originate at the egress gateway. For example, you can specify client certificate credentials and certificate authority certificates for verifying presented server certificates.
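For example, a DestinationRule subset along these lines is one way to originate mutual TLS to example.com. This is a sketch only; the certificate file paths are placeholders that would have to be mounted into the egress gateway pods:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: example-com-originate-mtls
  namespace: istio-egress
spec:
  host: example.com
  subsets:
  - name: example-com-originate-mTLS
    trafficPolicy:
      portLevelSettings:
      - port:
          number: 443
        tls:
          mode: MUTUAL
          clientCertificate: /etc/certs/client-cert.pem  # placeholder path
          privateKey: /etc/certs/client-key.pem          # placeholder path
          caCertificates: /etc/certs/ca-cert.pem         # placeholder path
          sni: example.com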

Pass-through of HTTPS/TLS connections

Your existing applications might already be using TLS connections when communicating with external services. You can configure the egress gateway to pass TLS connections through without decrypting them.

Caution: The gateway treats the encrypted pass-through TLS connection as a TCP connection and cannot read any HTTP-related attributes or any signed and trusted metadata. You can't use authorization policies to allow or deny pass-through traffic based on attributes of the request. The next steps of this tutorial create an extra authorization policy to allow any traffic that is sent to the egress gateway on port 443. That traffic can be unencrypted, and the policy doesn't restrict connections based on particular sources and destinations inside or outside of the mesh. If you need to apply authorization policy, we recommend that you avoid TLS pass-through.


  1. Modify your configuration so that the egress gateway uses TLS pass-through for connections to port 443:

cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: egress-gateway
  namespace: istio-egress
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: https
      protocol: HTTPS
    hosts:
      - '*'
    tls:
      mode: ISTIO_MUTUAL
  - port:
      number: 443
      name: tls
      protocol: TLS
    hosts:
      - '*'
    tls:
      mode: PASSTHROUGH
EOF
  2. Update the DestinationRule pointing to the egress gateway to add a second subset for port 443 on the gateway. This new subset doesn't use mutual TLS. Istio mutual TLS is not supported for pass-through of TLS connections. Connections on port 80 still use mTLS:

cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: target-egress-gateway
  namespace: istio-egress
spec:
  host: istio-egressgateway.istio-egress.svc.cluster.local
  subsets:
  - name: target-egress-gateway-mTLS
    trafficPolicy:
      portLevelSettings:
      - port:
          number: 80
        tls:
          mode: ISTIO_MUTUAL
  - name: target-egress-gateway-TLS-passthrough
EOF
  3. Update the virtual service for example.com so that TLS traffic on port 443 is passed through the gateway:

cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example-com-through-egress-gateway
  namespace: istio-egress
spec:
  hosts:
  - example.com
  gateways:
  - mesh
  - istio-egress/egress-gateway
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-egress.svc.cluster.local
        subset: target-egress-gateway-mTLS
        port:
          number: 80
  - match:
    - gateways:
      - istio-egress/egress-gateway
      port: 80
    route:
    - destination:
        host: example.com
        port:
          number: 443
        subset: example-com-originate-TLS
      weight: 100
  tls:
  - match:
    - gateways:
      - mesh
      port: 443
      sniHosts:
      - example.com
    route:
    - destination:
        host: istio-egressgateway.istio-egress.svc.cluster.local
        subset: target-egress-gateway-TLS-passthrough
        port:
          number: 443
  - match:
    - gateways:
      - istio-egress/egress-gateway
      port: 443
      sniHosts:
      - example.com
    route:
    - destination:
        host: example.com
        port:
          number: 443
      weight: 100
  exportTo:
  - 'istio-egress'
  - 'team-x'
EOF
  4. Update the virtual service for httpbin.org so that TLS traffic on port 443 is passed through the gateway:

cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: httpbin-org-through-egress-gateway
  namespace: istio-egress
spec:
  hosts:
  - httpbin.org
  gateways:
  - istio-egress/egress-gateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-egress.svc.cluster.local
        subset: target-egress-gateway-mTLS
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - istio-egress/egress-gateway
      port: 80
    route:
    - destination:
        host: httpbin.org
        port:
          number: 80
      weight: 100
  tls:
  - match:
    - gateways:
      - mesh
      port: 443
      sniHosts:
      - httpbin.org
    route:
    - destination:
        host: istio-egressgateway.istio-egress.svc.cluster.local
        subset: target-egress-gateway-TLS-passthrough
        port:
          number: 443
  - match:
    - gateways:
      - istio-egress/egress-gateway
      port: 443
      sniHosts:
      - httpbin.org
    route:
    - destination:
        host: httpbin.org
        port:
          number: 443
      weight: 100
  exportTo:
  - 'istio-egress'
  - 'team-x'
  - 'team-y'
EOF
  5. Add an authorization policy that accepts any kind of traffic sent to port 443 of the egress gateway service. The corresponding targetPort on the gateway pods is 8443.

cat << EOF | kubectl apply -f -
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: egress-all-443
  namespace: istio-egress
spec:
  action: ALLOW
  rules:
  - when:
    - key: destination.port
      values: ["8443"]
EOF
  6. Run istioctl analyze to check for configuration errors:

    ${ISTIOCTL} analyze -n istio-egress --revision REVISION

    You see:

    ✔ No validation issues found when analyzing namespace: istio-egress.
  7. Make a plain HTTP request to example.com from the test application in the team-x namespace:

    kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
        -o jsonpath={.items..metadata.name}) -c test -- curl -I http://example.com

    The request succeeds with a 200 OK response.

  8. Now make several TLS (HTTPS) requests from the test application in the team-x namespace:

    for i in {1..4}
    do
        kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
            -o jsonpath={.items..metadata.name}) -c test -- curl -s -o /dev/null \
            -w "%{http_code}\n" \
            https://example.com
    done

    You see 200 responses.

  9. Look at the egress gateway log again:

    kubectl -n istio-egress logs -f $(kubectl -n istio-egress get pod -l istio=egressgateway \
        -o jsonpath="{.items[0].metadata.name}") istio-proxy

    You see log entries similar to the following:

    [2020-09-24T18:04:38.608Z] "- - -" 0 - "-" "-" 1363 5539 10 - "-" "-" "-" "-" "93.184.216.34:443" outbound|443||example.com 10.1.4.31:51098 10.1.4.31:8443 10.1.1.15:57030 example.com -

    The HTTPS request has been treated as TCP traffic and passed through the gateway to the destination host, so no HTTP information is included in the log.

Using Kubernetes NetworkPolicy as an additional control

There are many scenarios in which an application can bypass a sidecar proxy. You can use Kubernetes NetworkPolicy to additionally specify which connections workloads are allowed to make. After a single network policy is applied, all connections that aren't specifically allowed are denied.

This tutorial only considers egress connections and egress selectors for network policies. If you control ingress with network policies on your own clusters, then you must create ingress policies to correspond to your egress policies. For example, if you allow egress from workloads in the team-x namespace to the team-y namespace, you must also allow ingress to the team-y namespace from the team-x namespace.
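For example, an ingress policy in the team-y namespace that corresponds to the optional team-x egress policy shown later in this section might look like the following sketch (not applied as part of this tutorial):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-team-x
  namespace: team-y
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          "kubernetes.io/metadata.name": team-x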

  1. Allow workloads and proxies deployed in the team-x namespace to connect to istiod and the egress gateway:

cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-control-plane
  namespace: team-x
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          "kubernetes.io/metadata.name": istio-system
      podSelector:
        matchLabels:
          istio: istiod
    - namespaceSelector:
        matchLabels:
          "kubernetes.io/metadata.name": istio-egress
      podSelector:
        matchLabels:
          istio: egressgateway
EOF
  2. Allow workloads and proxies to query DNS:

cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-dns
  namespace: team-x
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          "kubernetes.io/metadata.name": kube-system
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
EOF
  3. Allow workloads and proxies to connect to the IPs that serve Google APIs and services, including the Cloud Service Mesh certificate authority:

cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-google-apis
  namespace: team-x
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 199.36.153.4/30
    - ipBlock:
        cidr: 199.36.153.8/30
EOF
  4. Allow workloads and proxies to connect to the GKE metadata server:

cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-metadata-server
  namespace: team-x
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to: # For GKE data plane v2
    - ipBlock:
        cidr: 169.254.169.254/32
  - to: # For GKE data plane v1
    - ipBlock:
        cidr: 127.0.0.1/32 # Prior to 1.21.0-gke.1000
    - ipBlock:
        cidr: 169.254.169.252/32 # 1.21.0-gke.1000 and later
    ports:
    - protocol: TCP
      port: 987
    - protocol: TCP
      port: 988
EOF
  5. Optional: Allow workloads and proxies in the team-x namespace to make connections to each other:

cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-same-namespace
  namespace: team-x
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
  egress:
  - to:
    - podSelector: {}
EOF
  6. Optional: Allow workloads and proxies in the team-x namespace to make connections to workloads deployed by a different team:

cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-team-y
  namespace: team-x
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          "kubernetes.io/metadata.name": team-y
EOF
  7. Connections between sidecar proxies persist. Existing connections are not closed when you apply a new network policy. Restart the workloads in the team-x namespace to make sure that existing connections are closed:

    kubectl -n team-x rollout restart deployment
  8. Verify that you can still make an HTTP request to example.com from the test application in the team-x namespace:

    kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
        -o jsonpath={.items..metadata.name}) -c test -- curl -I http://example.com

    The request succeeds with a 200 OK response.

Directly accessing Google APIs using Private Google Access and IAM permissions

Google's APIs and services are exposed using external IP addresses. When pods with VPC-native alias IP addresses make connections to Google APIs by using Private Google Access, the traffic never leaves Google's network.

When you set up the infrastructure for this tutorial, you enabled Private Google Access for the subnet used by GKE Pods. To allow access to the IP addresses used by Private Google Access, you created a route, a VPC firewall rule, and a private DNS zone. This configuration lets Pods reach Google APIs directly without sending traffic through the egress gateway. You can control which APIs are available to specific Kubernetes service accounts (and hence namespaces) by using Workload Identity Federation for GKE and IAM. Istio authorization doesn't take effect because the egress gateway is not handling connections to the Google APIs.

Before Pods can call Google APIs, you must use IAM to grant permissions. The cluster you are using for this tutorial is configured to use Workload Identity Federation for GKE, which allows a Kubernetes service account to act as a Google service account.

  1. Create a Google service account for your application to use:

    gcloud iam service-accounts create sa-test-app-team-x
  2. Allow the Kubernetes service account to impersonate the Google service account:

    gcloud iam service-accounts add-iam-policy-binding \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:${PROJECT_ID}.svc.id.goog[team-x/test]" \
        sa-test-app-team-x@${PROJECT_ID}.iam.gserviceaccount.com
  3. Annotate the Kubernetes service account for the test app in the team-x namespace with the email address of the Google service account:

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    iam.gke.io/gcp-service-account: sa-test-app-team-x@${PROJECT_ID}.iam.gserviceaccount.com
  name: test
  namespace: team-x
EOF
  4. The test application pod must be able to access the Google metadata server (running as a DaemonSet) to obtain temporary credentials for calling Google APIs. Create a service entry for the GKE metadata server:

cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: metadata-google-internal
  namespace: istio-egress
  labels:
    # Show this service and its telemetry in the Cloud Service Mesh page of the Google Cloud console
    service.istio.io/canonical-name: metadata.google.internal
spec:
  hosts:
  - metadata.google.internal
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
  exportTo:
  - 'istio-egress'
  - 'team-x'
EOF
  5. Also create a service entry for private.googleapis.com and storage.googleapis.com:

cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: private-googleapis-com
  namespace: istio-egress
  labels:
    # Show this service and its telemetry in the Cloud Service Mesh page of the Google Cloud console
    service.istio.io/canonical-name: googleapis.com
spec:
  hosts:
  - private.googleapis.com
  - storage.googleapis.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
  exportTo:
  - 'istio-egress'
  - 'team-x'
EOF
  6. Verify that the Kubernetes service account is correctly configured to act as the Google service account:

    kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
        -o jsonpath={.items..metadata.name}) -c test -- gcloud auth list

    You see the Google service account listed as the active and only identity.

  7. Create a test file in a Cloud Storage bucket:

    echo"Hello, World!" >/tmp/hellogcloudstoragebucketscreategs://${PROJECT_ID}-bucketgcloudstoragecp/tmp/hellogs://${PROJECT_ID}-bucket/
  8. Grant permission for the service account to list and view files in the bucket:

    gcloud storage buckets add-iam-policy-binding gs://${PROJECT_ID}-bucket/ \
        --member=serviceAccount:sa-test-app-team-x@${PROJECT_ID}.iam.gserviceaccount.com \
        --role=roles/storage.objectViewer
  9. Verify that the test application can access the test bucket:

    kubectl -n team-x exec -it \
        $(kubectl -n team-x get pod -l app=test -o jsonpath={.items..metadata.name}) \
        -c test \
        -- gcloud storage cat gs://${PROJECT_ID}-bucket/hello

    You see:

    Hello, World!

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.


Delete the project

The easiest way to eliminate billing is to delete the project that you created for the tutorial.

    Caution: Deleting a project has the following effects:
    • Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project.
    • Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project.

    If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits.

  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

What's next

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2026-02-19 UTC.