Using Cloud Service Mesh egress gateways on GKE clusters: Tutorial
This tutorial shows how to use Cloud Service Mesh egress gateways and other Google Cloud controls to secure outbound traffic (egress) from workloads deployed on a Google Kubernetes Engine cluster. The tutorial is intended as a companion to the Best practices for using Cloud Service Mesh egress gateways on GKE clusters.

The intended audience for this tutorial includes network, platform, and security engineers who administer Google Kubernetes Engine clusters used by one or more software delivery teams. The controls described here are especially useful for organizations that must demonstrate compliance with regulations (for example, GDPR and PCI).
Objectives
- Set up the infrastructure for running Cloud Service Mesh:
  - Custom VPC network and private subnet
  - Cloud NAT for internet access
  - Private GKE cluster with an extra node pool for egress gateway pods
  - Restrictive egress VPC firewall rules; only gateway nodes can reach external hosts
  - Private Google Access for connecting to Container Registry and Google APIs
- Install Cloud Service Mesh.
- Install egress gateway proxies running on a dedicated node pool.
- Configure multi-tenant routing rules for external traffic through the egress gateway:
  - Applications in namespace team-x can connect to example.com
  - Applications in namespace team-y can connect to httpbin.org
- Use the Sidecar resource to restrict the scope of the sidecar proxy egress configuration for each namespace.
- Configure authorization policies to enforce egress rules.
- Configure the egress gateway to upgrade plain HTTP requests to TLS (TLS origination).
- Configure the egress gateway to pass TLS traffic through.
- Set up Kubernetes network policies as an additional egress control.
- Configure direct access to Google APIs using Private Google Access and Identity and Access Management (IAM) permissions.
Costs
In this document, you use the following billable components of Google Cloud:
- Compute Engine
- Google Kubernetes Engine (GKE)
- Container Registry
- Cloud Service Mesh
- Cloud Load Balancing
- Cloud NAT
- Networking
- Cloud Storage
To generate a cost estimate based on your projected usage, use the pricing calculator.

When you finish this tutorial, you can avoid ongoing costs by deleting the resources you created. For more information, see Clean up.
Before you begin
In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

Roles required to select or create a project:

- Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
- Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
Verify that billing is enabled for your Google Cloud project.
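If you prefer to check from the command line, the following is one way to confirm that billing is linked; the billingEnabled field name follows the Cloud Billing API, and the exact format string is an assumption:

gcloud billing projects describe PROJECT_ID \
    --format="value(billingEnabled)"

The command prints True when billing is enabled.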
In the Google Cloud console, activate Cloud Shell.
Create a working directory to use while following the tutorial:

mkdir -p ~/WORKING_DIRECTORY
cd ~/WORKING_DIRECTORY

Create a shell script to initialize your environment for the tutorial. Replace and edit the variables according to your project and preferences. Run this script with the source command to reinitialize your environment if your shell session expires:

cat <<'EOF' > ./init-egress-tutorial.sh
#! /usr/bin/env bash
PROJECT_ID=YOUR_PROJECT_ID
REGION=REGION
ZONE=ZONE

gcloud config set project ${PROJECT_ID}
gcloud config set compute/region ${REGION}
gcloud config set compute/zone ${ZONE}
EOF

Enable compute.googleapis.com:

gcloud services enable compute.googleapis.com \
    --project=YOUR_PROJECT_ID

Make the script executable and run it with the source command to initialize your environment. Select Y if prompted to enable compute.googleapis.com:

chmod +x ./init-egress-tutorial.sh
source ./init-egress-tutorial.sh
Setting up the infrastructure
Create a VPC network and subnet
Create a new VPC network:
gcloud compute networks create vpc-network \
    --subnet-mode custom

Create a subnet for the cluster to run in with pre-assigned secondary IP address ranges for Pods and services. Private Google Access is enabled so that applications with only internal IP addresses can reach Google APIs and services:

gcloud compute networks subnets create subnet-gke \
    --network vpc-network \
    --range 10.0.0.0/24 \
    --secondary-range pods=10.1.0.0/16,services=10.2.0.0/20 \
    --enable-private-ip-google-access
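Optionally, confirm that Private Google Access is enabled on the subnet by describing it. The privateIpGoogleAccess field name is assumed from the Compute Engine API; the output should be True:

gcloud compute networks subnets describe subnet-gke \
    --region ${REGION} \
    --format="value(privateIpGoogleAccess)"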
Configure Cloud NAT
Cloud NAT allows workloads without external IP addresses to connect to destinations on the internet and receive inbound responses from those destinations.

Note: Cloud Router and Cloud NAT are used only to configure NAT for external internet connectivity; they are not actually in the path of network traffic. NAT configuration is applied at the software-defined networking layer.

Create a Cloud Router:

gcloud compute routers create nat-router \
    --network vpc-network

Add a NAT configuration to the router:

gcloud compute routers nats create nat-config \
    --router nat-router \
    --nat-all-subnet-ip-ranges \
    --auto-allocate-nat-external-ips
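Optionally, verify the NAT configuration by describing it (the router's region is taken from the compute/region value set by the initialization script):

gcloud compute routers nats describe nat-config \
    --router nat-router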
Create service accounts for each GKE node pool
Create two service accounts for use by the two GKE node pools. A separate service account is assigned to each node pool so that you can apply VPC firewall rules to specific nodes.
Create a service account for use by the nodes in the default node pool:
gcloud iam service-accounts create sa-application-nodes \
    --description="SA for application nodes" \
    --display-name="sa-application-nodes"

Create a service account for use by the nodes in the gateway node pool:

gcloud iam service-accounts create sa-gateway-nodes \
    --description="SA for gateway nodes" \
    --display-name="sa-gateway-nodes"
Grant permissions to the service accounts
Add a minimal set of IAM roles to the application and gateway service accounts. These roles are required for logging, monitoring, and pulling private container images from Container Registry.

project_roles=(
    roles/logging.logWriter
    roles/monitoring.metricWriter
    roles/monitoring.viewer
    roles/storage.objectViewer
)
for role in "${project_roles[@]}"
do
    gcloud projects add-iam-policy-binding ${PROJECT_ID} \
        --member="serviceAccount:sa-application-nodes@${PROJECT_ID}.iam.gserviceaccount.com" \
        --role="$role"
    gcloud projects add-iam-policy-binding ${PROJECT_ID} \
        --member="serviceAccount:sa-gateway-nodes@${PROJECT_ID}.iam.gserviceaccount.com" \
        --role="$role"
done
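Optionally, confirm the bindings by querying the project's IAM policy. This flatten/filter/format combination is one common pattern for listing the roles granted to a single service account:

gcloud projects get-iam-policy ${PROJECT_ID} \
    --flatten="bindings[].members" \
    --filter="bindings.members:sa-gateway-nodes@${PROJECT_ID}.iam.gserviceaccount.com" \
    --format="table(bindings.role)"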
Creating the firewall rules

In the following steps, you apply a firewall rule to the VPC network so that, by default, all egress traffic is denied. Specific connectivity is required for the cluster to function and for gateway nodes to be able to reach destinations outside of the VPC. A minimal set of specific firewall rules overrides the default deny-all rule to allow the necessary connectivity.
Create a default (low priority) firewall rule to deny all egress from the VPC network:

gcloud compute firewall-rules create global-deny-egress-all \
    --action DENY \
    --direction EGRESS \
    --rules all \
    --destination-ranges 0.0.0.0/0 \
    --network vpc-network \
    --priority 65535 \
    --description "Default rule to deny all egress from the network."

Create a rule to allow only those nodes with the gateway service account to reach the internet:

gcloud compute firewall-rules create gateway-allow-egress-web \
    --action ALLOW \
    --direction EGRESS \
    --rules tcp:80,tcp:443 \
    --target-service-accounts sa-gateway-nodes@${PROJECT_ID}.iam.gserviceaccount.com \
    --network vpc-network \
    --priority 1000 \
    --description "Allow the nodes running the egress gateways to connect to the web"

Allow nodes to reach the Kubernetes control plane:

gcloud compute firewall-rules create allow-egress-to-api-server \
    --action ALLOW \
    --direction EGRESS \
    --rules tcp:443,tcp:10250 \
    --target-service-accounts sa-application-nodes@${PROJECT_ID}.iam.gserviceaccount.com,sa-gateway-nodes@${PROJECT_ID}.iam.gserviceaccount.com \
    --destination-ranges 10.5.0.0/28 \
    --network vpc-network \
    --priority 1000 \
    --description "Allow nodes to reach the Kubernetes API server."

Optional: This firewall rule is not needed if you use managed Cloud Service Mesh.

Cloud Service Mesh uses webhooks when injecting sidecar proxies into workloads. Allow the GKE API server to call the webhooks exposed by the service mesh control plane running on the nodes:

gcloud compute firewall-rules create allow-ingress-api-server-to-webhook \
    --action ALLOW \
    --direction INGRESS \
    --rules tcp:15017 \
    --target-service-accounts sa-application-nodes@${PROJECT_ID}.iam.gserviceaccount.com,sa-gateway-nodes@${PROJECT_ID}.iam.gserviceaccount.com \
    --source-ranges 10.5.0.0/28 \
    --network vpc-network \
    --priority 1000 \
    --description "Allow the API server to call the webhooks exposed by istiod discovery"

Allow egress connectivity between nodes and Pods running on the cluster. GKE automatically creates a corresponding ingress rule. No rule is required for Service connectivity because the iptables routing chain always converts Service IP addresses to Pod IP addresses.

gcloud compute firewall-rules create allow-egress-nodes-and-pods \
    --action ALLOW \
    --direction EGRESS \
    --rules all \
    --target-service-accounts sa-application-nodes@${PROJECT_ID}.iam.gserviceaccount.com,sa-gateway-nodes@${PROJECT_ID}.iam.gserviceaccount.com \
    --destination-ranges 10.0.0.0/24,10.1.0.0/16 \
    --network vpc-network \
    --priority 1000 \
    --description "Allow egress to other Nodes and Pods"

Allow access to the reserved sets of IP addresses used by Private Google Access for serving Google APIs, Container Registry, and other services:

gcloud compute firewall-rules create allow-egress-gcp-apis \
    --action ALLOW \
    --direction EGRESS \
    --rules tcp \
    --target-service-accounts sa-application-nodes@${PROJECT_ID}.iam.gserviceaccount.com,sa-gateway-nodes@${PROJECT_ID}.iam.gserviceaccount.com \
    --destination-ranges 199.36.153.8/30 \
    --network vpc-network \
    --priority 1000 \
    --description "Allow access to the VIPs used by Google Cloud APIs (Private Google Access)"

If you are using VPC Service Controls, use 199.36.153.4/30 instead. For more information, see the following section.
As an alternative to using the reserved set of internal IP addresses, you can expose a Private Service Connect endpoint with an internal IP address of your choice.
Allow the Google Cloud health checker service to access pods running in the cluster. See health checks for more information.

gcloud compute firewall-rules create allow-ingress-gcp-health-checker \
    --action ALLOW \
    --direction INGRESS \
    --rules tcp:80,tcp:443 \
    --target-service-accounts sa-application-nodes@${PROJECT_ID}.iam.gserviceaccount.com,sa-gateway-nodes@${PROJECT_ID}.iam.gserviceaccount.com \
    --source-ranges 35.191.0.0/16,130.211.0.0/22,209.85.152.0/22,209.85.204.0/22 \
    --network vpc-network \
    --priority 1000 \
    --description "Allow workloads to respond to Google Cloud health checks"
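To review the rules you have created so far, list the rules that apply to the VPC network, sorted by priority. The deny-all rule should appear last, at priority 65535:

gcloud compute firewall-rules list \
    --filter="network:vpc-network" \
    --sort-by=priority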
Configuring private access to Google Cloud APIs
Private Google Access enables VMs and Pods that only have internal IP addresses to access Google APIs and services. Although Google APIs and services are served from external IPs, traffic from the nodes never leaves the Google network when using Private Google Access.

Private Google Access provides different options and VIPs for connecting to Google APIs and services. This tutorial uses private.googleapis.com and its corresponding VIP, 199.36.153.8/30. If you use VPC Service Controls and want to block access to APIs that do not support VPC Service Controls, use restricted.googleapis.com and the 199.36.153.4/30 VIP.
As an alternative to using the reserved set of internal IP addresses, you can expose APIs using Private Service Connect endpoints with IP addresses of your choice.
Enable the Cloud DNS API:
gcloud services enable dns.googleapis.com

Create a private DNS zone, a CNAME, and A records so that nodes and workloads can connect to Google APIs and services using Private Google Access and the private.googleapis.com hostname:

gcloud dns managed-zones create private-google-apis \
    --description "Private DNS zone for Google APIs" \
    --dns-name googleapis.com \
    --visibility private \
    --networks vpc-network

gcloud dns record-sets transaction start --zone private-google-apis

gcloud dns record-sets transaction add private.googleapis.com. \
    --name "*.googleapis.com" \
    --ttl 300 \
    --type CNAME \
    --zone private-google-apis

gcloud dns record-sets transaction add "199.36.153.8" \
    "199.36.153.9" "199.36.153.10" "199.36.153.11" \
    --name private.googleapis.com \
    --ttl 300 \
    --type A \
    --zone private-google-apis

gcloud dns record-sets transaction execute --zone private-google-apis
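Optionally, list the record sets in the zone to confirm that the CNAME and A records were created:

gcloud dns record-sets list --zone private-google-apis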
Configuring private access to Container Registry

Create a private DNS zone, a CNAME, and an A record so that nodes can connect to Container Registry using Private Google Access and the gcr.io hostname:

gcloud dns managed-zones create private-gcr-io \
    --description "private zone for Container Registry" \
    --dns-name gcr.io \
    --visibility private \
    --networks vpc-network

gcloud dns record-sets transaction start --zone private-gcr-io

gcloud dns record-sets transaction add gcr.io. \
    --name "*.gcr.io" \
    --ttl 300 \
    --type CNAME \
    --zone private-gcr-io

gcloud dns record-sets transaction add "199.36.153.8" "199.36.153.9" "199.36.153.10" "199.36.153.11" \
    --name gcr.io \
    --ttl 300 \
    --type A \
    --zone private-gcr-io

gcloud dns record-sets transaction execute --zone private-gcr-io

Create a private GKE cluster
Find the external IP address of your Cloud Shell so that you can add it to the list of networks that are allowed to access your cluster's API server:

SHELL_IP=$(dig TXT -4 +short @ns1.google.com o-o.myaddr.l.google.com)

After a period of inactivity, the external IP address of your Cloud Shell VM can change. If that happens, you must update your cluster's list of authorized networks. Add the following command to your initialization script:

cat <<'EOF' >> ./init-egress-tutorial.sh
SHELL_IP=$(dig TXT -4 +short @ns1.google.com o-o.myaddr.l.google.com)
gcloud container clusters update cluster1 \
    --enable-master-authorized-networks \
    --master-authorized-networks ${SHELL_IP//\"}/32
EOF

Enable the Google Kubernetes Engine API:

gcloud services enable container.googleapis.com

Create a private GKE cluster:

gcloud container clusters create cluster1 \
    --enable-ip-alias \
    --enable-private-nodes \
    --release-channel "regular" \
    --enable-master-authorized-networks \
    --master-authorized-networks ${SHELL_IP//\"}/32 \
    --master-ipv4-cidr 10.5.0.0/28 \
    --enable-dataplane-v2 \
    --service-account "sa-application-nodes@${PROJECT_ID}.iam.gserviceaccount.com" \
    --machine-type "e2-standard-4" \
    --network "vpc-network" \
    --subnetwork "subnet-gke" \
    --cluster-secondary-range-name "pods" \
    --services-secondary-range-name "services" \
    --workload-pool "${PROJECT_ID}.svc.id.goog" \
    --zone ${ZONE}

It takes a few minutes for the cluster to be created. The cluster has private nodes with internal IP addresses. Pods and services are assigned IPs from the named secondary ranges that you defined when creating the VPC subnet.
Cloud Service Mesh with an in-cluster control plane requires the cluster nodes to use a machine type that has at least 4 vCPUs.

Google recommends that the cluster be subscribed to the "regular" release channel to ensure that nodes are running a Kubernetes version that is supported by Cloud Service Mesh.

For more information on the prerequisites for running Cloud Service Mesh with an in-cluster control plane, see the in-cluster prerequisites.

For more information on the requirements and limitations for running managed Cloud Service Mesh, see the managed Cloud Service Mesh supported features.

Workload Identity Federation for GKE is enabled on the cluster. Cloud Service Mesh requires Workload Identity Federation for GKE, and it is the recommended way to access Google APIs from GKE workloads.
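Optionally, confirm that the workload pool is set on the cluster; the field path below is assumed from the GKE API and should print ${PROJECT_ID}.svc.id.goog:

gcloud container clusters describe cluster1 \
    --zone ${ZONE} \
    --format="value(workloadIdentityConfig.workloadPool)"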
Create a node pool called gateway. This node pool is where the egress gateway is deployed. The dedicated=gateway:NoSchedule taint is added to every node in the gateway node pool.

gcloud container node-pools create "gateway" \
    --cluster "cluster1" \
    --machine-type "e2-standard-4" \
    --node-taints dedicated=gateway:NoSchedule \
    --service-account "sa-gateway-nodes@${PROJECT_ID}.iam.gserviceaccount.com" \
    --num-nodes "1"

Kubernetes taints and tolerations help ensure that only egress gateway Pods run on nodes in the gateway node pool.
Download credentials so that you can connect to the cluster with kubectl:

gcloud container clusters get-credentials cluster1

Verify that the gateway nodes have the correct taint:

kubectl get nodes -l cloud.google.com/gke-nodepool=gateway \
    -o=custom-columns='name:metadata.name,taints:spec.taints[?(@.key=="dedicated")]'

The output is similar to the following:

name                                 taints
gke-cluster1-gateway-9d65b410-cffs   map[effect:NoSchedule key:dedicated value:gateway]
Installing and setting up Cloud Service Mesh
Follow one of the installation guides for Cloud Service Mesh:
Once you have installed Cloud Service Mesh, stop and return to this tutorial without installing ingress or egress gateways.
Install an egress gateway
Create a Kubernetes namespace for the egress gateway:

kubectl create namespace istio-egress

Enable the namespace for injection. The steps depend on your control plane implementation.
Managed (TD)
Apply the default injection label to the namespace:

kubectl label namespace istio-egress \
    istio.io/rev- istio-injection=enabled --overwrite

Managed (Istiod)
Recommended: Run the following command to apply the default injection label to the namespace:

kubectl label namespace istio-egress \
    istio.io/rev- istio-injection=enabled --overwrite

If you are an existing user with the Managed Istiod control plane: We recommend that you use default injection, but revision-based injection is supported. Use the following instructions:

Run the following command to locate the available release channels:

kubectl -n istio-system get controlplanerevision

The output is similar to the following:

NAME                AGE
asm-managed-rapid   6d7h

Note: If two control plane revisions appear in the list, remove one. Having multiple control plane channels in the cluster is not supported.

In the output, the value under the NAME column is the revision label that corresponds to the available release channel for the Cloud Service Mesh version.

Apply the revision label to the namespace:

kubectl label namespace istio-egress \
    istio-injection- istio.io/rev=REVISION_LABEL --overwrite
In-cluster
Recommended: Run the following command to apply the default injection label to the namespace:

kubectl label namespace istio-egress \
    istio.io/rev- istio-injection=enabled --overwrite

We recommend that you use default injection, but revision-based injection is supported. Use the following instructions:

Use the following command to locate the revision label on istiod:

kubectl get deploy -n istio-system -l app=istiod -o \
    jsonpath={.items[*].metadata.labels.'istio\.io\/rev'}'{"\n"}'

Apply the revision label to the namespace. In the following command, REVISION_LABEL is the value of the istiod revision label that you noted in the previous step.

kubectl label namespace istio-egress \
    istio-injection- istio.io/rev=REVISION_LABEL --overwrite
Create an operator manifest for the egress gateway:

cat <<EOF > egressgateway-operator.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: egressgateway-operator
  annotations:
    config.kubernetes.io/local-config: "true"
spec:
  profile: empty
  revision: REVISION
  components:
    egressGateways:
    - name: istio-egressgateway
      namespace: istio-egress
      enabled: true
  values:
    gateways:
      istio-egressgateway:
        injectionTemplate: gateway
        tolerations:
        - key: "dedicated"
          operator: "Equal"
          value: "gateway"
        nodeSelector:
          cloud.google.com/gke-nodepool: "gateway"
EOF

Download the istioctl tool. You must use version 1.16.2-asm.2 or newer even if you are using Cloud Service Mesh version 1.15 or lower. See Downloading the correct istioctl version.

After extracting the downloaded archive, set an environment variable to hold the path to the istioctl tool and add it to your initialization script:

ISTIOCTL=$(find "$(pwd -P)" -name istioctl)
echo "ISTIOCTL=\"${ISTIOCTL}\"" >> ./init-egress-tutorial.sh

Create the egress gateway installation manifest using the operator manifest and istioctl:

${ISTIOCTL} manifest generate \
    --filename egressgateway-operator.yaml \
    --output egressgateway \
    --cluster-specific

You can view the generated manifest at egressgateway/Base/Pilot/EgressGateways/EgressGateways.yaml.
When deployed, it creates standard Kubernetes resources such as Deployment, Service, ServiceAccount, Role, RoleBinding, HorizontalPodAutoscaler, and PodDisruptionBudget. Using istioctl and the operator manifest is a convenient way to generate the deployment manifest. For your own production mesh, you can generate and manage the deployment manifest using your preferred tools.
Install the egress gateway:

kubectl apply --recursive --filename egressgateway/

Check that the egress gateway is running on nodes in the gateway node pool:

kubectl get pods -n istio-egress -o wide

The egress gateway pods have affinity for nodes in the gateway node pool and a toleration that lets them run on the tainted gateway nodes. Examine the node affinity and tolerations for the egress gateway pods:

kubectl -n istio-egress get pod -l istio=egressgateway \
    -o=custom-columns='name:metadata.name,node-affinity:spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms,tolerations:spec.tolerations[?(@.key=="dedicated")]'

The output is similar to the following:

name                                   node-affinity                                                                                   tolerations
istio-egressgateway-754d9684d5-jjkdz   [map[matchExpressions:[map[key:cloud.google.com/gke-nodepool operator:In values:[gateway]]]]]   map[key:dedicated operator:Equal value:gateway]
The operator manifest for the egress gateway specifies a toleration and a nodeSelector so that the deployed gateway runs only on gateway nodes.

Make sure that only network administrators can apply the gateway toleration to Pods. Unauthorized use of the gateway toleration allows deployed pods to impersonate the gateway and run on gateway nodes. Pods running on gateway nodes can connect directly to external hosts. Restrict use of the gateway toleration as part of deployment pipelines or by using Kubernetes admission control. A GKE Enterprise Policy Controller constraint can be used in both scenarios.
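As an illustration of the admission control approach, the following sketch uses the Kubernetes ValidatingAdmissionPolicy API (available on newer Kubernetes versions) instead of a Policy Controller constraint to reject Pods that carry the gateway toleration outside the istio-egress namespace. The resource names are hypothetical and the snippet is a starting point, not a hardened policy:

cat <<EOF | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: restrict-gateway-toleration  # hypothetical name
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["pods"]
  validations:
  # Reject any Pod that requests the dedicated=gateway toleration.
  - expression: >-
      !(has(object.spec.tolerations) &&
      object.spec.tolerations.exists(t,
      has(t.key) && t.key == 'dedicated' &&
      has(t.value) && t.value == 'gateway'))
    message: "The dedicated=gateway toleration is reserved for egress gateway Pods."
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: restrict-gateway-toleration
spec:
  policyName: restrict-gateway-toleration
  validationActions: ["Deny"]
  matchResources:
    # Exempt the namespace where the egress gateway itself is deployed.
    namespaceSelector:
      matchExpressions:
      - key: kubernetes.io/metadata.name
        operator: NotIn
        values: ["istio-egress"]
EOF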
Enable Envoy access logging
The steps required to enable Envoy access logs depend on your Cloud Service Mesh type, either managed or in-cluster:
Managed
Follow the instructions to enable access logs in managed Cloud Service Mesh.

In-cluster

Follow the instructions to enable access logs in in-cluster Cloud Service Mesh.
Preparing the mesh and a test application
Make sure that STRICT mutual TLS is enabled. Apply a default PeerAuthentication policy for the mesh in the istio-system namespace:

cat <<EOF | kubectl apply -f -
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT
EOF

You can override this configuration by creating PeerAuthentication resources in specific namespaces.

Create namespaces to use for deploying test workloads. Later steps in this tutorial explain how to configure different egress routing rules for each namespace.

kubectl create namespace team-x
kubectl create namespace team-y

Label the namespaces so that they can be selected by Kubernetes network policies:

kubectl label namespace team-x team=x
kubectl label namespace team-y team=y

For Cloud Service Mesh to automatically inject proxy sidecars, set the injection label on the workload namespaces:

kubectl label ns team-x istio.io/rev- istio-injection=enabled --overwrite
kubectl label ns team-y istio.io/rev- istio-injection=enabled --overwrite

Create a YAML file to use for making test deployments:

cat << 'EOF' > ./test.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test
---
apiVersion: v1
kind: Service
metadata:
  name: test
  labels:
    app: test
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      serviceAccountName: test
      containers:
      - name: test
        image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
        command: ["/bin/sleep", "infinity"]
        imagePullPolicy: IfNotPresent
EOF
Deploy the test application to the team-x namespace:

kubectl -n team-x create -f ./test.yaml

Verify that the test application is deployed to a node in the default pool and that a proxy sidecar container is injected. Repeat the following command until the pod's status is Running:

kubectl -n team-x get po -l app=test -o wide

The output is similar to the following:

NAME                   READY   STATUS    RESTARTS   AGE   IP          NODE
test-d5bdf6f4f-9nxfv   2/2     Running   0          19h   10.1.1.25   gke-cluster1-default-pool-f6c7a51f-wbzj

2 out of 2 containers are Running. One container is the test application and the other is the proxy sidecar.

The Pod is running on a node in the default node pool.
Verify that it is not possible to make an HTTP request from the test container to an external site:

kubectl -n team-x exec -it \
    $(kubectl -n team-x get pod -l app=test -o jsonpath={.items..metadata.name}) \
    -c test -- curl -v http://example.com

An error message from the sidecar proxy is generated because the global-deny-egress-all firewall rule denies the upstream connection.
Using the Sidecar resource to restrict the scope of sidecar proxy configuration
You can use the Sidecar resource to restrict the scope of the egress listener that is configured for sidecar proxies. To reduce configuration bloat and memory usage, it's a good practice to apply a default Sidecar resource for every namespace.

The proxy that Cloud Service Mesh runs in the sidecar is Envoy. In Envoy terminology, a cluster is a logically similar group of upstream endpoints used as destinations for load balancing.
Inspect the outbound clusters configured in the Envoy sidecar proxy for the test pod by running the istioctl proxy-config command:

${ISTIOCTL} pc c $(kubectl -n team-x get pod -l app=test \
    -o jsonpath={.items..metadata.name}).team-x --direction outbound

There are approximately 11 Envoy clusters in the list, including some for the egress gateway.

Restrict the proxy configuration to egress routes that have been explicitly defined with service entries in the egress and team-x namespaces. Apply a Sidecar resource to the team-x namespace:

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: team-x
spec:
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY
  egress:
  - hosts:
    - 'istio-egress/*'
    - 'team-x/*'
EOF

Setting the outbound traffic policy mode to REGISTRY_ONLY restricts the proxy configuration to include only those external hosts that have been explicitly added to the mesh's service registry by defining service entries.

Setting egress.hosts specifies that the sidecar proxy only selects routes from the egress namespace that are made available by using the exportTo attribute. The 'team-x/*' part includes any routes that have been configured locally in the team-x namespace.

View the outbound clusters configured in the Envoy sidecar proxy, and compare them to the list of clusters that were configured before applying the Sidecar resource:

${ISTIOCTL} pc c $(kubectl -n team-x get pod -l app=test \
    -o jsonpath={.items..metadata.name}).team-x --direction outbound

You see clusters for the egress gateway and one for the test pod itself.
Configuring Cloud Service Mesh to route traffic through the egress gateway
Configure a Gateway for HTTP traffic on port 80. The Gateway selects the egress gateway proxy that you deployed to the egress namespace. The Gateway configuration is applied to the egress namespace and handles traffic for any host.

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: egress-gateway
  namespace: istio-egress
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: https
      protocol: HTTPS
    hosts:
    - '*'
    tls:
      mode: ISTIO_MUTUAL
EOF

Create a DestinationRule for the egress gateway with mutual TLS for authentication and encryption. Use a single shared destination rule for all external hosts.

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: target-egress-gateway
  namespace: istio-egress
spec:
  host: istio-egressgateway.istio-egress.svc.cluster.local
  subsets:
  - name: target-egress-gateway-mTLS
    trafficPolicy:
      tls:
        mode: ISTIO_MUTUAL
EOF

Create a ServiceEntry in the egress namespace to explicitly register example.com in the mesh's service registry for the team-x namespace:

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: example-com-ext
  namespace: istio-egress
  labels:
    # Show this service and its telemetry in the Cloud Service Mesh page of the Google Cloud console
    service.istio.io/canonical-name: example.com
spec:
  hosts:
  - example.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
  exportTo:
  - 'team-x'
  - 'istio-egress'
EOF

The exportTo property controls which namespaces can use the service entry. Network administrators use service entries to centrally control the set of external hosts available to each namespace. Configure Kubernetes RBAC permissions so that only network administrators can directly create and modify service entries. Consider creating automation with an approval workflow so that application developers can request access to new external hosts.
Create a VirtualService to route traffic to example.com through the egress gateway. There are two match conditions: the first condition directs traffic to the egress gateway, and the second directs traffic from the egress gateway to the destination host. The exportTo property controls which namespaces can use the virtual service.

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: example-com-through-egress-gateway
  namespace: istio-egress
spec:
  hosts:
  - example.com
  gateways:
  - istio-egress/egress-gateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-egress.svc.cluster.local
        subset: target-egress-gateway-mTLS
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - istio-egress/egress-gateway
      port: 80
    route:
    - destination:
        host: example.com
        port:
          number: 80
      weight: 100
  exportTo:
  - 'istio-egress'
  - 'team-x'
EOF

Run istioctl analyze to check for configuration errors:

${ISTIOCTL} analyze -n istio-egress --revision REVISION

The output is similar to the following:

✔ No validation issues found when analyzing namespace: istio-egress.
Send several requests through the egress gateway to the external site:

for i in {1..4}
do
    kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
        -o jsonpath={.items..metadata.name}) -c test -- \
    curl -s -o /dev/null -w "%{http_code}\n" http://example.com
done

You see 200 status codes for all four responses.

Verify that the requests were directed through the egress gateway by checking the proxy access logs. First check the access log for the proxy sidecar deployed with the test application:

kubectl -n team-x logs -f $(kubectl -n team-x get pod -l app=test \
    -o jsonpath={.items..metadata.name}) istio-proxy

For each request you send, you see a log entry similar to the following:

[2020-09-14T17:37:08.045Z] "HEAD / HTTP/1.1" 200 - "-" "-" 0 0 5 4 "-" "curl/7.67.0" "d57ea5ad-90e9-46d9-8b55-8e6e404a8f9b" "example.com" "10.1.4.12:8080" outbound|80||istio-egressgateway.istio-egress.svc.cluster.local 10.1.0.17:42140 93.184.216.34:80 10.1.0.17:60326 - -

Also check the egress gateway access log:

kubectl -n istio-egress logs -f $(kubectl -n istio-egress get pod -l istio=egressgateway \
    -o jsonpath="{.items[0].metadata.name}") istio-proxy

For each request you send, you see an egress gateway access log entry similar to the following:

[2020-09-14T17:37:08.045Z] "HEAD / HTTP/2" 200 - "-" "-" 0 0 4 3 "10.1.0.17" "curl/7.67.0" "095711e6-64ef-4de0-983e-59158e3c55e7" "example.com" "93.184.216.34:80" outbound|80||example.com 10.1.4.12:37636 10.1.4.12:8080 10.1.0.17:44404 outbound_.80_.target-egress-gateway-mTLS_.istio-egressgateway.istio-egress.svc.cluster.local -
Configure different routing for a second namespace
Configure routing for a second external host to learn how different external connectivity can be configured for different teams.

Create a Sidecar resource for the team-y namespace:

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: team-y
spec:
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY
  egress:
  - hosts:
    - 'istio-egress/*'
    - 'team-y/*'
EOF

Deploy the test application to the team-y namespace:

kubectl -n team-y create -f ./test.yaml

Register a second external host and export it to the team-x and the team-y namespaces:

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: httpbin-org-ext
  namespace: istio-egress
  labels:
    # Show this service and its telemetry in the Cloud Service Mesh page of the Google Cloud console
    service.istio.io/canonical-name: httpbin.org
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
  exportTo:
  - 'istio-egress'
  - 'team-x'
  - 'team-y'
EOF

Create a virtual service to route traffic to httpbin.org through the egress gateway:

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: httpbin-org-through-egress-gateway
  namespace: istio-egress
spec:
  hosts:
  - httpbin.org
  gateways:
  - istio-egress/egress-gateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-egress.svc.cluster.local
        subset: target-egress-gateway-mTLS
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - istio-egress/egress-gateway
      port: 80
    route:
    - destination:
        host: httpbin.org
        port:
          number: 80
      weight: 100
  exportTo:
  - 'istio-egress'
  - 'team-x'
  - 'team-y'
EOF
Run istioctl analyze to check for configuration errors:

${ISTIOCTL} analyze -n istio-egress --revision REVISION

You see:

✔ No validation issues found when analyzing namespace: istio-egress.

Make a request to httpbin.org from the team-y test app:

kubectl -n team-y exec -it $(kubectl -n team-y get pod -l app=test -o \
    jsonpath={.items..metadata.name}) -c test -- curl -I http://httpbin.org

You see a 200 OK response.

Also make a request to httpbin.org from the team-x test app:

kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
    -o jsonpath={.items..metadata.name}) -c test -- curl -I http://httpbin.org

You see a 200 OK response.

Attempt to make a request to example.com from the team-y namespace:

kubectl -n team-y exec -it $(kubectl -n team-y get pod -l app=test \
    -o jsonpath={.items..metadata.name}) -c test -- curl -I http://example.com

The request fails because there is no outbound route configured for the example.com host.
Using Authorization Policy to provide additional control over traffic
In this tutorial, authorization policies for the egress gateway are created in the istio-egress namespace. You can configure Kubernetes RBAC so that only network administrators can access the istio-egress namespace.
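A minimal sketch of such RBAC follows, assuming a network administrators group; the Role name and the group email in the RoleBinding are hypothetical placeholders:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mesh-network-admin  # hypothetical name
  namespace: istio-egress
rules:
# Full control over the Istio networking and security resources used in this tutorial.
- apiGroups: ["networking.istio.io", "security.istio.io"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mesh-network-admins
  namespace: istio-egress
subjects:
- kind: Group
  name: network-admins@example.com  # hypothetical group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: mesh-network-admin
  apiGroup: rbac.authorization.k8s.io
EOF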
Create an AuthorizationPolicy so that applications in the team-x namespace can connect to example.com but not to other external hosts when sending requests using port 80. The corresponding targetPort on the egress gateway pods is 8080.

cat <<EOF | kubectl apply -f -
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: egress-team-x-to-example-com
  namespace: istio-egress
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces:
        - 'team-x'
    to:
    - operation:
        hosts:
        - 'example.com'
    when:
    - key: destination.port
      values: ["8080"]
EOF

Verify that you can make a request to example.com from the test application in the team-x namespace:

kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
    -o jsonpath={.items..metadata.name}) -c test -- curl -I http://example.com

You see a 200 OK response.

Try to make a request to httpbin.org from the test application in the team-x namespace:

kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
    -o jsonpath={.items..metadata.name}) -c test -- curl -s -w " %{http_code}\n" \
    http://httpbin.org

The request fails with an RBAC: access denied message and a 403 Forbidden status code. You might need to wait a few seconds because there is often a short delay before the authorization policy takes effect.
Authorization policies provide rich control over which traffic is allowed or denied. Apply the following authorization policy to allow the test app in the team-y namespace to make requests to httpbin.org by using one particular URL path when sending requests using port 80. The corresponding targetPort on the egress gateway pods is 8080.

cat <<EOF | kubectl apply -f -
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: egress-team-y-to-httpbin-teapot
  namespace: istio-egress
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces:
        - 'team-y'
    to:
    - operation:
        hosts:
        - httpbin.org
        paths: ['/status/418']
    when:
    - key: destination.port
      values: ["8080"]
EOF

Attempt to connect to httpbin.org from the test app in the team-y namespace:

kubectl -n team-y exec -it $(kubectl -n team-y get pod -l app=test \
    -o jsonpath={.items..metadata.name}) -c test -- curl -s -w " %{http_code}\n" \
    http://httpbin.org

The request fails with an RBAC: access denied message and a 403 Forbidden status code.

Now make a request to httpbin.org/status/418 from the same app:

kubectl -n team-y exec -it $(kubectl -n team-y get pod -l app=test \
    -o jsonpath={.items..metadata.name}) -c test -- curl http://httpbin.org/status/418

The request succeeds because the path matches the pattern in the authorization policy. The output is similar to the following:

    -=[ teapot ]=-

       _...._
     .'  _ _ `.
    | ."` ^ `". _,
    \_;`"---"`|//
      |       ;/
      \_     _/
        `"""`
TLS origination at the egress gateway
You can configure egress gateways to upgrade (originate) plain HTTP requests to TLS or mutual TLS. Allowing applications to make plain HTTP requests has several advantages when used with Istio mutual TLS and TLS origination. For more information, see the best practices guide.
Create a DestinationRule. The DestinationRule specifies that the gateway originate a TLS connection to example.com.

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: example-com-originate-tls
  namespace: istio-egress
spec:
  host: example.com
  subsets:
  - name: example-com-originate-TLS
    trafficPolicy:
      portLevelSettings:
      - port:
          number: 443
        tls:
          mode: SIMPLE
          sni: example.com
EOF

Update the virtual service for example.com so that requests to port 80 on the gateway are upgraded to TLS on port 443 when they are sent to the destination host:

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example-com-through-egress-gateway
  namespace: istio-egress
spec:
  hosts:
  - example.com
  gateways:
  - mesh
  - istio-egress/egress-gateway
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-egress.svc.cluster.local
        subset: target-egress-gateway-mTLS
        port:
          number: 80
  - match:
    - gateways:
      - istio-egress/egress-gateway
      port: 80
    route:
    - destination:
        host: example.com
        port:
          number: 443
        subset: example-com-originate-TLS
      weight: 100
EOF

Make several requests to example.com from the test app in the team-x namespace:

for i in {1..4}
do
    kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
        -o jsonpath={.items..metadata.name}) -c test -- curl -I http://example.com
done

As before, the requests succeed with 200 OK responses.

Check the egress gateway log to verify that the gateway routed the requests to the destination host by originating TLS connections:

kubectl -n istio-egress logs -f $(kubectl -n istio-egress get pod -l istio=egressgateway \
    -o jsonpath="{.items[0].metadata.name}") istio-proxy

The output is similar to the following:

[2020-09-24T17:58:02.548Z] "HEAD / HTTP/2" 200 - "-" "-" 0 0 6 5 "10.1.1.15" "curl/7.67.0" "83a77acb-d994-424d-83da-dd8eac902dc8" "example.com" "93.184.216.34:443" outbound|443|example-com-originate-TLS|example.com 10.1.4.31:49866 10.1.4.31:8080 10.1.1.15:37334 outbound_.80_.target-egress-gateway-mTLS_.istio-egressgateway.istio-egress.svc.cluster.local -

The proxy sidecar sent the request to the gateway using port 80, and TLS originated on port 443 to send the request to the destination host.
Pass-through of HTTPS/TLS connections
Your existing applications might already be using TLS connections when communicating with external services. You can configure the egress gateway to pass TLS connections through without decrypting them.

Caution: The gateway treats the encrypted pass-through TLS connection as a TCP connection and cannot read any HTTP-related attributes or any signed and trusted metadata. You can't use authorization policies to allow or deny pass-through traffic based on attributes of the request. The next steps of this tutorial create an extra authorization policy to allow any traffic that is sent to the egress gateway on port 443. That traffic can be unencrypted, and the policy doesn't restrict connections based on particular sources and destinations inside or outside of the mesh. If you need to apply authorization policy, we recommend that you avoid TLS pass-through.

Modify your configuration so that the egress gateway uses TLS pass-through for connections to port 443:
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: egress-gateway
  namespace: istio-egress
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: https
      protocol: HTTPS
    hosts:
    - '*'
    tls:
      mode: ISTIO_MUTUAL
  - port:
      number: 443
      name: tls
      protocol: TLS
    hosts:
    - '*'
    tls:
      mode: PASSTHROUGH
EOF

Update the DestinationRule pointing to the egress gateway to add a second subset for port 443 on the gateway. This new subset doesn't use mutual TLS. Istio mutual TLS is not supported for pass-through of TLS connections. Connections on port 80 still use mTLS:

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: target-egress-gateway
  namespace: istio-egress
spec:
  host: istio-egressgateway.istio-egress.svc.cluster.local
  subsets:
  - name: target-egress-gateway-mTLS
    trafficPolicy:
      portLevelSettings:
      - port:
          number: 80
        tls:
          mode: ISTIO_MUTUAL
  - name: target-egress-gateway-TLS-passthrough
EOF

Update the virtual service for example.com so that TLS traffic on port 443 is passed through the gateway:

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example-com-through-egress-gateway
  namespace: istio-egress
spec:
  hosts:
  - example.com
  gateways:
  - mesh
  - istio-egress/egress-gateway
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-egress.svc.cluster.local
        subset: target-egress-gateway-mTLS
        port:
          number: 80
  - match:
    - gateways:
      - istio-egress/egress-gateway
      port: 80
    route:
    - destination:
        host: example.com
        port:
          number: 443
        subset: example-com-originate-TLS
      weight: 100
  tls:
  - match:
    - gateways:
      - mesh
      port: 443
      sniHosts:
      - example.com
    route:
    - destination:
        host: istio-egressgateway.istio-egress.svc.cluster.local
        subset: target-egress-gateway-TLS-passthrough
        port:
          number: 443
  - match:
    - gateways:
      - istio-egress/egress-gateway
      port: 443
      sniHosts:
      - example.com
    route:
    - destination:
        host: example.com
        port:
          number: 443
      weight: 100
  exportTo:
  - 'istio-egress'
  - 'team-x'
EOF

Update the virtual service for httpbin.org so that TLS traffic on port 443 is passed through the gateway:

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: httpbin-org-through-egress-gateway
  namespace: istio-egress
spec:
  hosts:
  - httpbin.org
  gateways:
  - istio-egress/egress-gateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-egress.svc.cluster.local
        subset: target-egress-gateway-mTLS
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - istio-egress/egress-gateway
      port: 80
    route:
    - destination:
        host: httpbin.org
        port:
          number: 80
      weight: 100
  tls:
  - match:
    - gateways:
      - mesh
      port: 443
      sniHosts:
      - httpbin.org
    route:
    - destination:
        host: istio-egressgateway.istio-egress.svc.cluster.local
        subset: target-egress-gateway-TLS-passthrough
        port:
          number: 443
  - match:
    - gateways:
      - istio-egress/egress-gateway
      port: 443
      sniHosts:
      - httpbin.org
    route:
    - destination:
        host: httpbin.org
        port:
          number: 443
      weight: 100
  exportTo:
  - 'istio-egress'
  - 'team-x'
  - 'team-y'
EOF

Add an authorization policy that accepts any kind of traffic sent to port 443 of the egress gateway service. The corresponding targetPort on the gateway pods is 8443.

cat <<EOF | kubectl apply -f -
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: egress-all-443
  namespace: istio-egress
spec:
  action: ALLOW
  rules:
  - when:
    - key: destination.port
      values: ["8443"]
EOF
Run istioctl analyze to check for configuration errors:

${ISTIOCTL} analyze -n istio-egress --revision REVISION

You see:

✔ No validation issues found when analyzing namespace: istio-egress.

Make a plain HTTP request to example.com from the test application in the team-x namespace:

kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
    -o jsonpath={.items..metadata.name}) -c test -- curl -I http://example.com

The request succeeds with a 200 OK response.

Now make several TLS (HTTPS) requests from the test application in the team-x namespace:

for i in {1..4}
do
    kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
        -o jsonpath={.items..metadata.name}) -c test -- curl -s -o /dev/null \
        -w "%{http_code}\n" \
        https://example.com
done

You see 200 responses.

Look at the egress gateway log again:

kubectl -n istio-egress logs -f $(kubectl -n istio-egress get pod -l istio=egressgateway \
    -o jsonpath="{.items[0].metadata.name}") istio-proxy

You see log entries similar to the following:

[2020-09-24T18:04:38.608Z] "- - -" 0 - "-" "-" 1363 5539 10 - "-" "-" "-" "-" "93.184.216.34:443" outbound|443||example.com 10.1.4.31:51098 10.1.4.31:8443 10.1.1.15:57030 example.com -

The HTTPS request has been treated as TCP traffic and passed through the gateway to the destination host, so no HTTP information is included in the log.
Using Kubernetes NetworkPolicy as an additional control
There are many scenarios in which an application can bypass a sidecar proxy. You can use Kubernetes NetworkPolicy to additionally specify which connections workloads are allowed to make. After a single network policy is applied, all connections that aren't specifically allowed are denied.

This tutorial only considers egress connections and egress selectors for network policies. If you control ingress with network policies on your own clusters, then you must create ingress policies to correspond to your egress policies. For example, if you allow egress from workloads in the team-x namespace to the team-y namespace, you must also allow ingress to the team-y namespace from the team-x namespace. A sketch of such an ingress policy appears after the egress policies below.
Allow workloads and proxies deployed in the team-x namespace to connect to istiod and the egress gateway:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-control-plane
  namespace: team-x
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          "kubernetes.io/metadata.name": istio-system
      podSelector:
        matchLabels:
          istio: istiod
    - namespaceSelector:
        matchLabels:
          "kubernetes.io/metadata.name": istio-egress
      podSelector:
        matchLabels:
          istio: egressgateway
EOF

Allow workloads and proxies to query DNS:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-dns
  namespace: team-x
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          "kubernetes.io/metadata.name": kube-system
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
EOF

Allow workloads and proxies to connect to the IPs that serve Google APIs and services, including the Cloud Service Mesh certificate authority:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-google-apis
  namespace: team-x
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 199.36.153.4/30
    - ipBlock:
        cidr: 199.36.153.8/30
EOF

Allow workloads and proxies to connect to the GKE metadata server:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-metadata-server
  namespace: team-x
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to: # For GKE data plane v2
    - ipBlock:
        cidr: 169.254.169.254/32
  - to: # For GKE data plane v1
    - ipBlock:
        cidr: 127.0.0.1/32 # Prior to 1.21.0-gke.1000
    - ipBlock:
        cidr: 169.254.169.252/32 # 1.21.0-gke.1000 and later
    ports:
    - protocol: TCP
      port: 987
    - protocol: TCP
      port: 988
EOF

Optional: Allow workloads and proxies in the team-x namespace to make connections to each other:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-same-namespace
  namespace: team-x
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
  egress:
  - to:
    - podSelector: {}
EOF

Optional: Allow workloads and proxies in the team-x namespace to make connections to workloads deployed by a different team:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-team-y
  namespace: team-x
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          "kubernetes.io/metadata.name": team-y
EOF
Connections between sidecar proxies persist. Existing connections are not closed when you apply a new network policy. Restart the workloads in the team-x namespace to make sure existing connections are closed:

kubectl -n team-x rollout restart deployment

Verify that you can still make an HTTP request to example.com from the test application in the team-x namespace:

kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
    -o jsonpath={.items..metadata.name}) -c test -- curl -I http://example.com

The request succeeds with a 200 OK response.
Directly accessing Google APIs using Private Google Access and IAM permissions
Google's APIs and services are exposed using external IP addresses. When pods with VPC-native alias IP addresses make connections to Google APIs by using Private Google Access, the traffic never leaves Google's network.

When you set up the infrastructure for this tutorial, you enabled Private Google Access for the subnet used by GKE pods. To allow access to the IP addresses used by Private Google Access, you created a route, a VPC firewall rule, and a private DNS zone. This configuration lets pods reach Google APIs directly without sending traffic through the egress gateway. You can control which APIs are available to specific Kubernetes service accounts (and hence namespaces) by using Workload Identity Federation for GKE and IAM. Istio authorization doesn't take effect because the egress gateway is not handling connections to the Google APIs.

Before pods can call Google APIs, you must use IAM to grant permissions. The cluster you are using for this tutorial is configured to use Workload Identity Federation for GKE, which allows a Kubernetes service account to act as a Google service account.
Create a Google service account for your application to use:

gcloud iam service-accounts create sa-test-app-team-x

Allow the Kubernetes service account to impersonate the Google service account:

gcloud iam service-accounts add-iam-policy-binding \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:${PROJECT_ID}.svc.id.goog[team-x/test]" \
    sa-test-app-team-x@${PROJECT_ID}.iam.gserviceaccount.com
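Optionally, read back the service account's IAM policy to confirm that the workloadIdentityUser binding is in place:

gcloud iam service-accounts get-iam-policy \
    sa-test-app-team-x@${PROJECT_ID}.iam.gserviceaccount.com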
Annotate the Kubernetes service account for the test app in the team-x namespace with the email address of the Google service account:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    iam.gke.io/gcp-service-account: sa-test-app-team-x@${PROJECT_ID}.iam.gserviceaccount.com
  name: test
  namespace: team-x
EOF

The test application pod must be able to access the Google metadata server (running as a DaemonSet) to obtain temporary credentials for calling Google APIs. Create a service entry for the GKE metadata server:

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: metadata-google-internal
  namespace: istio-egress
  labels:
    # Show this service and its telemetry in the Cloud Service Mesh page of the Google Cloud console
    service.istio.io/canonical-name: metadata.google.internal
spec:
  hosts:
  - metadata.google.internal
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
  exportTo:
  - 'istio-egress'
  - 'team-x'
EOF

Also create a service entry for private.googleapis.com and storage.googleapis.com:

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: private-googleapis-com
  namespace: istio-egress
  labels:
    # Show this service and its telemetry in the Cloud Service Mesh page of the Google Cloud console
    service.istio.io/canonical-name: googleapis.com
spec:
  hosts:
  - private.googleapis.com
  - storage.googleapis.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
  exportTo:
  - 'istio-egress'
  - 'team-x'
EOF

Verify that the Kubernetes service account is correctly configured to act as the Google service account:
kubectl -n team-x exec -it $(kubectl -n team-x get pod -l app=test \
    -o jsonpath={.items..metadata.name}) -c test -- gcloud auth list

You see the Google service account listed as the active and only identity.

Create a test file in a Cloud Storage bucket:

echo "Hello, World!" > /tmp/hello
gcloud storage buckets create gs://${PROJECT_ID}-bucket
gcloud storage cp /tmp/hello gs://${PROJECT_ID}-bucket/

Grant permission for the service account to list and view files in the bucket:

gcloud storage buckets add-iam-policy-binding gs://${PROJECT_ID}-bucket/ \
    --member=serviceAccount:sa-test-app-team-x@${PROJECT_ID}.iam.gserviceaccount.com \
    --role=roles/storage.objectViewer

Verify that the test application can access the test bucket:

kubectl -n team-x exec -it \
    $(kubectl -n team-x get pod -l app=test -o jsonpath={.items..metadata.name}) \
    -c test \
    -- gcloud storage cat gs://${PROJECT_ID}-bucket/hello

You see:

Hello, World!
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
Delete the project
The easiest way to eliminate billing is to delete the project you created for the tutorial.
What's next
- Read the companion best practices guide.
- Consult the GKE hardening guide.
- Learn how to automate TLS certificate management for the Cloud Service Mesh ingress gateway using CA Service.
- Learn about managing configuration and policy across all of your infrastructure with GKE Enterprise Configuration Management.
- For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.