Ingress configuration
Overview
This page provides a comprehensive overview of what you can configure through Kubernetes Ingress on Google Cloud. The document also compares supported features for Ingress on Google Cloud and provides instructions for configuring Ingress using the default controller, FrontendConfig parameters, and BackendConfig parameters.
This page is for Networking specialists who design and architect the network for their organization and install, configure, and support network equipment. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE user roles and tasks.
Feature comparison
The following table provides a list of supported features for Ingress on Google Cloud. The availability of each feature, General availability (GA) or Beta, is also indicated.
| Ingress class | External Ingress | Internal Ingress | Multi Cluster Ingress |
|---|---|---|---|
| Ingress controller | Google-hosted Ingress controller | Google-hosted Ingress controller | Google-hosted Ingress controller |
| Google Cloud load balancer type | External HTTP(S) LB | Internal HTTP(S) LB | External HTTP(S) LB |
| Cluster scope | Single cluster | Single cluster | Multi-cluster |
| Load balancer scope | Global | Regional | Global |
| Environment support | GKE | GKE | GKE |
| Shared VPC support | GA | GA | GA |
| Service annotations | | | |
| Container-native Load Balancing (NEGs) | GA | GA | GA |
| HTTPS from load balancer to backends | GA | GA | GA |
| HTTP/2 | GA | GA (TLS only) | GA |
| Ingress annotations | | | |
| Static IP addresses | GA | GA | GA |
| Kubernetes Secrets-based certificates | GA | GA | GA |
| Self-managed SSL certificates | GA | GA | GA |
| Google-managed SSL certificates | GA | | GA |
| FrontendConfig | | | |
| SSL policy | GA | GA with Gateway | GA |
| HTTP-to-HTTPS redirect | GA 1.17.13-gke.2600+ (G) | | GA |
| BackendConfig | | | |
| Backend service timeout | GA | GA | GA |
| Cloud CDN | GA | | GA |
| Connection draining timeout | GA | GA | GA |
| Custom load balancer health check configuration | GA | GA | GA |
| Google Cloud Armor security policy | GA 1.19.10-gke.700 (G) | | GA |
| HTTP access logging configuration | GA | GA | GA |
| Identity-Aware Proxy (IAP) | GA | GA | GA |
| Session affinity | GA | GA | GA |
| User-defined request headers | GA | | GA |
| Custom response headers | GA 1.25-gke+ (G) | | |

(B) This feature is available in beta starting from the specified version. Features without a version listed are supported for all available GKE versions.

(G) This feature is supported as GA starting from the specified version.
Configuring Ingress using the default controller
You cannot manually configure LoadBalancer features using the Google Cloud SDK or the Google Cloud console. You must use BackendConfig or FrontendConfig Kubernetes resources.
When creating an Ingress using the default controller, you can choose the type of load balancer (an external Application Load Balancer or an internal Application Load Balancer) by using an annotation on the Ingress object. You can choose whether GKE creates zonal NEGs or uses instance groups by using an annotation on each Service object.
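For illustration, a minimal sketch of these two annotations follows; the resource names are placeholders, the ingress.class value gce-internal selects an internal Application Load Balancer (gce selects an external one), and the NEG annotation asks GKE to create zonal NEGs for the Service:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-internal-ingress  # placeholder name
  annotations:
    # Choose the load balancer type: "gce" (external) or "gce-internal" (internal).
    kubernetes.io/ingress.class: "gce-internal"
---
apiVersion: v1
kind: Service
metadata:
  name: my-service  # placeholder name
  annotations:
    # Ask GKE to create zonal NEGs instead of using instance groups.
    cloud.google.com/neg: '{"ingress": true}'
```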
FrontendConfig and BackendConfig custom resource definitions (CRDs) allow you to further customize the load balancer. These CRDs allow you to define additional load balancer features hierarchically, in a more structured way than annotations. To use Ingress (and these CRDs), you must have the HTTP load balancing add-on enabled. GKE clusters have HTTP load balancing enabled by default; you must not disable it.
FrontendConfigs are referenced in an Ingress object and can only be used with external Ingresses. BackendConfigs are referenced by a Service object. The same CRDs can be referenced by multiple Service or Ingress objects for configuration consistency. The FrontendConfig and BackendConfig CRDs share the same lifecycle as their corresponding Ingress and Service resources and they are often deployed together.
The following diagram illustrates how:
An annotation on an Ingress or MultiClusterIngress object references a FrontendConfig CRD. The FrontendConfig CRD references a Google Cloud SSL policy.
An annotation on a Service or MultiClusterService object references a BackendConfig CRD. The BackendConfig CRD specifies custom settings for the corresponding backend service's health check.
Associating FrontendConfig with your Ingress
FrontendConfig can only be used with External Ingresses.
You can associate a FrontendConfig with an Ingress or a MultiClusterIngress.
Ingress
Use the networking.gke.io/v1beta1.FrontendConfig annotation:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: "FRONTENDCONFIG_NAME"
...
```

Replace FRONTENDCONFIG_NAME with the name of your FrontendConfig.
MultiClusterIngress
Use the networking.gke.io/frontend-config annotation:

```yaml
apiVersion: networking.gke.io/v1
kind: MultiClusterIngress
metadata:
  annotations:
    networking.gke.io/frontend-config: "FRONTENDCONFIG_NAME"
...
```

Replace FRONTENDCONFIG_NAME with the name of your FrontendConfig.
Associating BackendConfig with your Ingress
You can use the cloud.google.com/backend-config or beta.cloud.google.com/backend-config annotation to specify the name of a BackendConfig.
Same BackendConfig for all Service ports
To use the same BackendConfig for all ports, use the default key in the annotation. The Ingress controller uses the same BackendConfig each time it creates a load balancer backend service to reference one of the Service's ports.

You can use the default key for both Ingress and MultiClusterIngress resources.

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
...
```

Unique BackendConfig per Service port
For both Ingress and MultiClusterIngress, you can specify a custom BackendConfig for one or more ports using a key that matches the port's name or number. The Ingress controller uses the specific BackendConfig when it creates a load balancer backend service for a referenced Service port.
GKE 1.16-gke.3 and later
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"ports": {"SERVICE_REFERENCE_A": "BACKENDCONFIG_REFERENCE_A", "SERVICE_REFERENCE_B": "BACKENDCONFIG_REFERENCE_B"}}'
spec:
  ports:
  - name: PORT_NAME_1
    port: PORT_NUMBER_1
    protocol: TCP
    targetPort: 50000
  - name: PORT_NAME_2
    port: PORT_NUMBER_2
    protocol: TCP
    targetPort: 8080
...
```

All supported versions

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"ports": {PORT_NAME_1: "BACKENDCONFIG_REFERENCE_A", PORT_NAME_2: "BACKENDCONFIG_REFERENCE_B"}}'
spec:
  ports:
  - name: PORT_NAME_1
    port: PORT_NUMBER_1
    protocol: TCP
    targetPort: 50000
  - name: PORT_NAME_2
    port: PORT_NUMBER_2
    protocol: TCP
    targetPort: 8080
...
```

Replace the following:

- BACKENDCONFIG_REFERENCE_A: the name of an existing BackendConfig.
- BACKENDCONFIG_REFERENCE_B: the name of an existing BackendConfig.
- SERVICE_REFERENCE_A: use the value of PORT_NUMBER_1 or PORT_NAME_1. This is because a Service's BackendConfig annotation can refer to either the port's name (spec.ports[].name) or the port's number (spec.ports[].port).
- SERVICE_REFERENCE_B: use the value of PORT_NUMBER_2 or PORT_NAME_2. This is because a Service's BackendConfig annotation can refer to either the port's name (spec.ports[].name) or the port's number (spec.ports[].port).
When referring to the Service's port by number, you must use the port value instead of the targetPort value. The port number used here is only for binding the BackendConfig; it does not control the port to which the load balancer sends traffic or health check probes:

- When using container-native load balancing, the load balancer sends traffic to an endpoint in a network endpoint group (matching a Pod IP address) on the referenced Service port's targetPort (which must match a containerPort for a serving Pod). Otherwise, the load balancer sends traffic to a node's IP address on the referenced Service port's nodePort.
- When using a BackendConfig to provide a custom load balancer health check, the port number you use for the load balancer's health check can differ from the Service's spec.ports[].port number. For details about port numbers for health checks, see Custom health check configuration.

An Ingress references a Service port through backend.service.port. If the service port is not used in any Ingress object, the BackendConfig is ignored.

Configuring Ingress features through FrontendConfig parameters
The following section shows you how to set your FrontendConfig to enable specific Ingress features.
SSL policies
SSL policies allow you to specify a set of TLS versions and ciphers that the load balancer uses to terminate HTTPS traffic from clients. You must first create an SSL policy outside of GKE. Once created, you can reference it in a FrontendConfig CRD.

The sslPolicy field in the FrontendConfig references the name of an SSL policy in the same Google Cloud project as the GKE cluster. It attaches the SSL policy to the target HTTPS proxy, which was created for the external HTTP(S) load balancer by the Ingress. The same FrontendConfig resource and SSL policy can be referenced by multiple Ingress resources. If a referenced SSL policy is changed, the change is propagated to the Google Front Ends (GFEs) that power your external HTTP(S) load balancer created by the Ingress.
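For instance, the SSL policy referenced in the next manifest could be created with the gcloud CLI as sketched below; the MODERN profile and minimum TLS version 1.2 are illustrative choices, not requirements:

```
gcloud compute ssl-policies create gke-ingress-ssl-policy \
    --profile MODERN \
    --min-tls-version 1.2
```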
The following FrontendConfig manifest enables an SSL policy named gke-ingress-ssl-policy:

```yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  sslPolicy: gke-ingress-ssl-policy
```

HTTP to HTTPS redirects
An external HTTP load balancer can redirect unencrypted HTTP requests to an HTTPS load balancer that uses the same IP address. When you create an Ingress with HTTP to HTTPS redirects enabled, both of these load balancers are created automatically. Requests to the external IP address of the Ingress on port 80 are automatically redirected to the same external IP address on port 443. This functionality is built on HTTP to HTTPS redirects provided by Cloud Load Balancing.

To support HTTP to HTTPS redirection, an Ingress must be configured to serve both HTTP and HTTPS traffic. If either HTTP or HTTPS is disabled, redirection will not work.

HTTP to HTTPS redirects are configured using the redirectToHttps field in a FrontendConfig custom resource. Redirects are enabled for the entire Ingress resource, so all services referenced by the Ingress will have HTTPS redirects enabled.

The following FrontendConfig manifest enables HTTP to HTTPS redirects. Set the spec.redirectToHttps.enabled field to true to enable HTTPS redirects. The spec.responseCodeName field is optional. If it's omitted, a 301 Moved Permanently redirect is used.

```yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: RESPONSE_CODE
```

Replace RESPONSE_CODE with one of the following:

- MOVED_PERMANENTLY_DEFAULT to return a 301 redirect response code (default if responseCodeName is unspecified).
- FOUND to return a 302 redirect response code.
- SEE_OTHER to return a 303 redirect response code.
- TEMPORARY_REDIRECT to return a 307 redirect response code.
- PERMANENT_REDIRECT to return a 308 redirect response code.
When redirects are enabled, the Ingress controller creates a load balancer as shown in the following diagram:

To validate that your redirect is working, use a curl command:

curl http://IP_ADDRESS

Replace IP_ADDRESS with the IP address of your Ingress.

The response shows the redirect response code that you configured. For example, the following response is for a FrontendConfig configured with a 301 Moved Permanently redirect:

```
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://35.244.160.59/">here</A>.
</BODY></HTML>
```

Configuring Ingress features through BackendConfig parameters
The following sections show you how to set your BackendConfig to enable specific Ingress features. Changes to a BackendConfig resource are constantly reconciled, so you do not need to delete and recreate your Ingress to see that BackendConfig changes are reflected.

For information on BackendConfig limitations, see the limitations section.
Backend service timeout
You can use a BackendConfig to set a backend service timeout period in seconds. If you do not specify a value, the default value is 30 seconds.

The following BackendConfig manifest specifies a timeout of 40 seconds:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  timeoutSec: 40
```

Cloud CDN
You can enable Cloud CDN using a BackendConfig.

Note: You cannot enable both IAP and Cloud CDN in a BackendConfig. If the BackendConfig doesn't have an iap block, then any existing IAP settings on the backend service are inherited.

The following BackendConfig manifest shows all the fields available when enabling Cloud CDN:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  cdn:
    enabled: CDN_ENABLED
    cachePolicy:
      includeHost: INCLUDE_HOST
      includeProtocol: INCLUDE_PROTOCOL
      includeQueryString: INCLUDE_QUERY_STRING
      # Specify only one of queryStringBlacklist and queryStringWhitelist.
      queryStringBlacklist: QUERY_STRING_DENYLIST
      queryStringWhitelist: QUERY_STRING_ALLOWLIST
    cacheMode: CACHE_MODE
    clientTtl: CLIENT_TTL
    defaultTtl: DEFAULT_TTL
    maxTtl: MAX_TTL
    negativeCaching: NEGATIVE_CACHING
    negativeCachingPolicy:
    - code: NEGATIVE_CACHING_CODE
      ttl: NEGATIVE_CACHING_TTL
    requestCoalescing: REQ_COALESCING
    # Time, in seconds, to continue serving a stale version after request expiry.
    serveWhileStale: SERVE_WHILE_STALE
    signedUrlCacheMaxAgeSec: SIGNED_MAX_AGE
    signedUrlKeys:
    - keyName: KEY_NAME
      keyValue: KEY_VALUE
      secretName: SECRET_NAME
```

Replace the following:
- CDN_ENABLED: If set to true, Cloud CDN is enabled for this Ingress backend.
- INCLUDE_HOST: If set to true, requests to different hosts are cached separately.
- INCLUDE_PROTOCOL: If set to true, HTTP and HTTPS requests are cached separately.
- INCLUDE_QUERY_STRING: If set to true, query string parameters are included in the cache key according to queryStringBlacklist or queryStringWhitelist. If neither is set, the entire query string is included. If set to false, the entire query string is excluded from the cache key.
- QUERY_STRING_DENYLIST: Specify a string array with the names of query string parameters to exclude from cache keys. All other parameters are included. You can specify queryStringBlacklist or queryStringWhitelist, but not both.
- QUERY_STRING_ALLOWLIST: Specify a string array with the names of query string parameters to include in cache keys. All other parameters are excluded. You can specify queryStringBlacklist or queryStringWhitelist, but not both.
The following fields are only supported in GKE versions 1.23.3-gke.900 and later using GKE Ingress. They are not supported using Multi Cluster Ingress:

- CACHE_MODE: The cache mode.
- CLIENT_TTL, DEFAULT_TTL, and MAX_TTL: TTL configuration. For more information, see Using TTL settings and overrides.
- NEGATIVE_CACHING: If set to true, negative caching is enabled. For more information, see Using negative caching.
- NEGATIVE_CACHING_CODE and NEGATIVE_CACHING_TTL: Negative caching configuration. For more information, see Using negative caching.
- REQ_COALESCING: If set to true, request collapsing is enabled. For more information, see Request collapsing (coalescing).
- SERVE_WHILE_STALE: Time, in seconds, after the response has expired that Cloud CDN continues serving a stale version. For more information, see Serving stale content.
- SIGNED_MAX_AGE: Maximum time responses can be cached, in seconds. For more information, see Optionally customizing the maximum cache time.
- KEY_NAME, KEY_VALUE, and SECRET_NAME: Signed URL key configuration. For more information, see Creating signed request keys.
Expand the following section to see an example that deploys Cloud CDN through Ingress and then validates that application content is being cached.
Cloud CDN example
To use a BackendConfig to configure Cloud CDN, perform the following tasks:
- Create a dedicated namespace for this example to run in:
kubectl create namespace cdn-how-to
- Create a Deployment file named my-deployment.yaml based on the following Deployment manifest. This manifest declares that you want to run two replicas of the ingress-gce-echo-amd64 web application.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: cdn-how-to
  name: my-deployment
spec:
  selector:
    matchLabels:
      purpose: demonstrate-cdn
  replicas: 2
  template:
    metadata:
      labels:
        purpose: demonstrate-cdn
    spec:
      containers:
      - name: echo-amd64
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app-cdn:1.0
```
- Create the Deployment resource:
kubectl apply -f my-deployment.yaml
- Create a BackendConfig named my-backendconfig.yaml based on the following BackendConfig manifest. The manifest specifies a Cloud CDN cache policy and declares that Cloud CDN should be enabled:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  namespace: cdn-how-to
  name: my-backendconfig
spec:
  cdn:
    enabled: true
    cachePolicy:
      includeHost: true
      includeProtocol: true
      includeQueryString: false
```
- Create the BackendConfig resource:
kubectl apply -f my-backendconfig.yaml
- Create a file named my-service.yaml based on the following Service manifest:

```yaml
apiVersion: v1
kind: Service
metadata:
  namespace: cdn-how-to
  name: my-service
  labels:
    purpose: demonstrate-cdn
  # Use a custom BackendConfig for port 80 in the Service.
  annotations:
    cloud.google.com/backend-config: '{"ports": {"80": "my-backendconfig"}}'
spec:
  type: NodePort
  selector:
    purpose: demonstrate-cdn
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
```
- Create the Service resource:
kubectl apply -f my-service.yaml
- Create a reserved IP address:
gcloud compute addresses create cdn-how-to-address --global
- Create a file named my-ingress.yaml based on the following Ingress manifest:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: cdn-how-to
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "cdn-how-to-address"
spec:
  rules:
  # Route all incoming HTTP requests to port 80 in the example Service.
  - http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: my-service
            port:
              number: 80
```
- Create the Ingress resource:
kubectl apply -f my-ingress.yaml
- Wait approximately ten minutes for the Kubernetes Ingress controller to configure a Google Cloud load balancer, and then retrieve the IP address used by the load balancer's forwarding rules from the Ingress resource:

kubectl describe ingress my-ingress --namespace=cdn-how-to | grep "Address"

Output:

```
Address: ADDRESS
```

ADDRESS is your Ingress external IP address.
Validate Cloud CDN caching
To validate if your content is being cached, enter this curl command twice:

curl -v ADDRESS/?cache=true

Replace ADDRESS with your Ingress external IP address.

The output shows the response headers and body. In the response headers, you can see that the content was cached. The Age header tells you how many seconds the content has been cached:

```
HTTP/1.1 200 OK
Date: Fri, 25 Jan 2019 02:34:08 GMT
Content-Length: 70
Content-Type: text/plain; charset=utf-8
Via: 1.1 google
Cache-Control: max-age=86400,public
Age: 2716

Hello, world!
Version: 1.0.0
Hostname: my-deployment-7f589cc5bc-l8kr8
...
```

If you find that your content is not being cached, make sure that your application is properly configured to enable caching of content. For more information, see Cacheable content.
Cleaning up
To avoid incurring unwanted charges to your account, release the static IP address that you reserved:
gcloud compute addresses delete cdn-how-to-address --global
Connection draining timeout
You can configure connection draining timeout using a BackendConfig. Connection draining timeout is the time, in seconds, to wait for connections to drain. For the specified duration of the timeout, existing requests to the removed backend are given time to complete. The load balancer does not send new requests to the removed backend. After the timeout duration is reached, all remaining connections to the backend are closed. The timeout duration can be from 0 to 3600 seconds. The default value is 0, which also disables connection draining.

Warning: Setting a high value for drainingTimeoutSec (for example, 3600 seconds) on an Ingress or Service that uses instance groups will cause delays proportional to the duration set in drainingTimeoutSec when updating other Ingress resources in the cluster. This is because all instance group based Ingress resources in a cluster share the same instance group. If you need a high value, consider using Services that are backed by network endpoint groups.

The following BackendConfig manifest specifies a connection draining timeout of 60 seconds:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  connectionDraining:
    drainingTimeoutSec: 60
```

Custom health check configuration
There are a variety of ways that GKE configures Google Cloud load balancer health checks when deploying through Ingress. To learn more about how GKE Ingress deploys health checks, see Ingress health checks.

A BackendConfig allows you to precisely control the load balancer health check settings.

Note: Ingress does not support gRPC for custom health check configurations.

You can configure multiple GKE Services to reference the same BackendConfig as a reusable template. For healthCheck parameters, a unique Google Cloud health check is created for each backend service corresponding to each GKE Service.
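As a minimal sketch of this reusable-template pattern, the following two Services (the names are hypothetical) both reference the single BackendConfig my-backendconfig, and GKE still creates a separate Google Cloud health check for each resulting backend service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-a  # hypothetical name
  annotations:
    # Both Services reference the same BackendConfig as a template.
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  selector:
    app: app-a
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-b  # hypothetical name
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  selector:
    app: app-b
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
```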
The following BackendConfig manifest shows all the fields available when configuring a BackendConfig health check:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  healthCheck:
    # Time, in seconds, between prober checks. Default is `5`.
    checkIntervalSec: INTERVAL
    # Probe timeout period. Must be less than or equal to checkIntervalSec.
    timeoutSec: TIMEOUT
    healthyThreshold: HEALTH_THRESHOLD
    unhealthyThreshold: UNHEALTHY_THRESHOLD
    # Protocol to use. Must be `HTTP`, `HTTP2`, or `HTTPS`.
    type: PROTOCOL
    # Path for probe to use. Default is `/`.
    requestPath: PATH
    # Port number of the load balancer health check port. Default is `80`.
    port: PORT
```

Replace the following:
- INTERVAL: Specify the check-interval, in seconds, for each health check prober. This is the time from the start of one prober's check to the start of its next check. If you omit this parameter, the Google Cloud default of 5 seconds is used. For additional implementation details, see Multiple probes and frequency.
- TIMEOUT: Specify the amount of time that Google Cloud waits for a response to a probe. The value of TIMEOUT must be less than or equal to the INTERVAL. Units are seconds. Each probe requires an HTTP 200 (OK) response code to be delivered before the probe timeout.
- HEALTH_THRESHOLD and UNHEALTHY_THRESHOLD: Specify the number of sequential connection attempts that must succeed or fail, for at least one prober, in order to change the health state from healthy to unhealthy or vice versa. If you omit one of these parameters, Google Cloud uses the default value of 2.
- PROTOCOL: Specify a protocol used by probe systems for health checking. The BackendConfig only supports creating health checks using the HTTP, HTTPS, or HTTP2 protocols. For more information, see Success criteria for HTTP, HTTPS, and HTTP/2. You cannot omit this parameter.
- PATH: For HTTP, HTTPS, or HTTP2 health checks, specify the request-path to which the probe system should connect. If you omit this parameter, Google Cloud uses the default of /.
- PORT: A BackendConfig only supports specifying the load balancer health check port by using a port number. If you omit this parameter, Google Cloud uses the default value 80.
  - When using container-native load balancing, you should select a port matching a containerPort of a serving Pod (whether or not that containerPort is referenced by a targetPort of the Service). Because the load balancer sends probes to the Pod's IP address directly, you are not limited to containerPorts referenced by a Service's targetPort. Health check probe systems connect to a serving Pod's IP address on the port you specify.
  - For instance group backends, you must select a port matching a nodePort exposed by the Service. Health check probe systems then connect to each node on that port.

To set up GKE Ingress with a custom HTTP health check, see GKE Ingress with custom HTTP health check.
Google Cloud Armor Ingress security policy
Google Cloud Armor security policies help you protect your load-balanced applications from web-based attacks. Once you have configured a Google Cloud Armor security policy, you can reference it using a BackendConfig.
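If you still need to create the policy, a sketch of that step with the gcloud CLI follows; the policy name matches the manifest below and the description is illustrative:

```
gcloud compute security-policies create example-security-policy \
    --description "policy for GKE Ingress backend services"
```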
Add the name of your security policy to the BackendConfig. The following BackendConfig manifest specifies a security policy named example-security-policy:
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  namespace: cloud-armor-how-to
  name: my-backendconfig
spec:
  securityPolicy:
    name: "example-security-policy"
```

Note: Do not use BackendService resources within Compute Engine to attach your Cloud Armor policies to your GKE-managed backend services.

Two sources of truth
Though configured through GKE, the underlying Compute Engine BackendService APIs can still be used to directly modify which security policy to apply. This creates two sources of truth: GKE and Compute Engine. The GKE Ingress controller's behavior in response to the securityPolicy field within BackendConfig is documented in the table below. To avoid conflict and unexpected behavior, we recommend using the GKE BackendConfig for the management of which security policy to use.
| BackendConfig field | Value | Behavior |
|---|---|---|
| spec.securityPolicy.name | CloudArmorPolicyName | The GKE Ingress controller sets the Cloud Armor policy named CloudArmorPolicyName on the load balancer. The GKE Ingress controller overwrites whatever policy was previously set. |
| spec.securityPolicy.name | Empty string ("") | The GKE Ingress controller removes any configured Cloud Armor policy from the load balancer. |
| spec.securityPolicy | nil | The GKE Ingress controller uses the configuration set on the BackendService object configured through the Compute Engine API using the Google Cloud console, gcloud CLI, or Terraform. |
To set up GKE Ingress with Google Cloud Armor protection, see Google Cloud Armor enabled Ingress.
HTTP access logging
Note: By default, GKE enables access logging for all ingress types (including Multi Cluster Ingress). The access logging setting for GKE Ingress was not supported for GKE versions prior to 1.16.8-gke.10. For GKE versions 1.16.8-gke.10 or later, you can configure the access logging setting for GKE Ingress through the BackendConfig.

Ingress can log all HTTP requests from clients to Cloud Logging. You can enable and disable access logging using BackendConfig, along with setting the access logging sampling rate.

To configure access logging, use the logging field in your BackendConfig. If logging is omitted, access logging takes the default behavior. This is dependent on the GKE version.
You can configure the following fields:
- enable: If set to true, access logging is enabled for this Ingress and logs are available in Cloud Logging. Otherwise, access logging is disabled for this Ingress.
- sampleRate: Specify a value from 0.0 through 1.0, where 0.0 means no packets are logged and 1.0 means 100% of packets are logged. This field is only relevant if enable is set to true. sampleRate is an optional field, but if it's configured then enable: true must also be set or else it is interpreted as enable: false.

The following BackendConfig manifest enables access logging and sets the sample rate to 50% of the HTTP requests for a given Ingress resource:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  logging:
    enable: true
    sampleRate: 0.5
```

Identity-Aware Proxy
To configure the BackendConfig for Identity-Aware Proxy (IAP), you need to specify the enabled and secretName values in the iap block of your BackendConfig. To specify these values, ensure that the GKE service agent has the compute.backendServices.update permission.
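The secretName field references a Kubernetes Secret that holds your OAuth client credentials. As a sketch, assuming you already have an OAuth client ID and secret, you might create it as follows; the key names client_id and client_secret follow the Enabling IAP for GKE documentation:

```
kubectl create secret generic my-secret \
    --from-literal=client_id=CLIENT_ID \
    --from-literal=client_secret=CLIENT_SECRET
```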
Note: If the BackendConfig doesn't have an iap block, then any existing IAP settings on the backend service are inherited.

The following BackendConfig manifest enables Identity-Aware Proxy:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: my-secret
```

Enable IAP with the Google-managed OAuth client
Starting in GKE 1.29.4-gke.1043000, IAP can be configured to use the Google-managed OAuth client using a BackendConfig. To decide whether to use the Google-managed OAuth client or a custom OAuth client, see When to use a custom OAuth configuration in the IAP documentation.

To enable IAP with the Google-managed OAuth client, do not provide the OAuthCredentials in the BackendConfig. For users who already configured IAP using OAuthCredentials, there is no migration path to switch to using the Google-managed OAuth client: you must recreate the backend (remove the Service from the Ingress and re-attach it). We suggest performing this operation during a maintenance window because it causes downtime. The opposite migration path, switching from the Google-managed OAuth client to OAuthCredentials, is possible.
The following BackendConfig manifest enables Identity-Aware Proxy with the Google-managed OAuth client:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  iap:
    enabled: true
```

For full instructions, see Enabling IAP for GKE in the IAP documentation.

Note: If you are using IAP on GKE cluster versions 1.29.2-gke.1521000, 1.29.3-gke.1093000, 1.29.3-gke.1093002, or 1.29.3-gke.1282000, to ensure no outage, upgrade to GKE cluster version 1.29.4-gke.1043000.

Identity-Aware Proxy with internal Ingress

To configure internal Ingress for IAP, you must use the Premium Tier. If you do not use the Premium Tier, you cannot view or create internal Application Load Balancers with Identity-Aware Proxy. You must also have a Chrome Enterprise Premium subscription to use internal Ingress for IAP.
To set up secure GKE Ingress with Identity-Aware Proxy based authentication, see the example IAP enabled Ingress.
Session affinity
You can use a BackendConfig to set session affinity to client IP or generated cookie.

Important: Use a VPC-native cluster if you want to configure session affinity. Session affinity is useful only for Services that are backed by network endpoint groups, and network endpoint groups require a VPC-native cluster.

Client IP affinity
To use a BackendConfig to set client IP affinity, set affinityType to "CLIENT_IP", as in the following example:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
```

Generated cookie affinity
To use a BackendConfig to set generated cookie affinity, set affinityType to GENERATED_COOKIE in your BackendConfig manifest. You can also use affinityCookieTtlSec to set the time period for the cookie to remain active.

The following manifest sets the affinity type to generated cookie and gives the cookies a TTL of 50 seconds:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 50
```

User-defined request headers
You can use a BackendConfig to configure user-defined request headers. The load balancer adds the headers you specify to the requests it forwards to the backends.

The load balancer adds custom request headers only to the client requests, not to the health check probes. If your backend requires a specific header for authorization that is missing from the health check packet, the health check might fail.

To enable user-defined request headers, you specify a list of headers in the customRequestHeaders property of the BackendConfig resource. Specify each header as a header-name:header-value string.
The following BackendConfig manifest adds three request headers:
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  customRequestHeaders:
    headers:
    - "X-Client-Region:{client_region}"
    - "X-Client-City:{client_city}"
    - "X-Client-CityLatLong:{client_city_lat_long}"
```

Custom response headers

Note: Custom response headers are not supported for internal Application Load Balancers. To configure custom headers on internal traffic, use GKE Gateway instead. For more information, see Configure custom request and response headers.

To enable custom response headers, you specify a list of headers in the customResponseHeaders property of the BackendConfig resource. Specify each header as a header-name:header-value string.

Custom response headers are available only in GKE clusters version 1.25 and later.

The following BackendConfig manifest is an example that adds an HTTP Strict Transport Security (HSTS) response header for an external Ingress:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  customResponseHeaders:
    headers:
    - "Strict-Transport-Security: max-age=28800; includeSubDomains"
```

Exercise: Setting Ingress timeouts using a backend service
The following exercise shows you the steps required for configuring timeouts and connection draining for an Ingress with a BackendConfig resource.

To configure the backend properties of an Ingress, complete the following tasks:
- Create a Deployment.
- Create a BackendConfig.
- Create a Service, and associate one of its ports with the BackendConfig.
- Create an Ingress, and associate the Ingress with the (Service, port) pair.
- Validate the properties of the backend service.
- Clean up.
Creating a Deployment
Before you create a BackendConfig and a Service, you need to create a Deployment.

The following example manifest is for a Deployment named my-deployment.yaml:

```yaml
# my-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      purpose: bsc-config-demo
  replicas: 2
  template:
    metadata:
      labels:
        purpose: bsc-config-demo
    spec:
      containers:
      - name: hello-app-container
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
```

Create the Deployment by running the following command:

kubectl apply -f my-deployment.yaml

Creating a BackendConfig
Use your BackendConfig to specify the Ingress features you want to use.
This BackendConfig manifest, named my-backendconfig.yaml, specifies:
- A timeout of 40 seconds.
- A connection draining timeout of 60 seconds.
```yaml
# my-backendconfig.yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  timeoutSec: 40
  connectionDraining:
    drainingTimeoutSec: 60
```

Create the BackendConfig by running the following command:

kubectl apply -f my-backendconfig.yaml

Creating a Service
A BackendConfig corresponds to a single Service-Port combination, even if a Service has multiple ports. Each port can be associated with a single BackendConfig CRD. If a Service port is referenced by an Ingress, and if the Service port is associated with a BackendConfig, then the HTTP(S) load balancing backend service takes part of its configuration from the BackendConfig.

The following is an example Service manifest named my-service.yaml:

```yaml
# my-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  # Associate the Service with Pods that have the same label.
  labels:
    purpose: bsc-config-demo
  annotations:
    # Associate TCP port 80 with a BackendConfig.
    cloud.google.com/backend-config: '{"ports": {"80": "my-backendconfig"}}'
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    purpose: bsc-config-demo
  ports:
  # Forward requests from port 80 in the Service to port 8080 in a member Pod.
  - port: 80
    protocol: TCP
    targetPort: 8080
```

The cloud.google.com/backend-config annotation specifies a mapping between ports and BackendConfig objects. In my-service.yaml:
- Any Pod that has the label purpose: bsc-config-demo is a member of the Service.
- TCP port 80 of the Service is associated with a BackendConfig named my-backendconfig. The cloud.google.com/backend-config annotation specifies this.
- A request sent to port 80 of the Service is forwarded to one of the member Pods on port 8080.
To create the Service, run the following command:
kubectl apply -f my-service.yaml

Creating an Ingress
The following is an Ingress manifest named my-ingress.yaml. In this example, incoming requests are routed to port 80 of the Service named my-service.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  # Route all HTTP requests to port 80 in a Service.
  - http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: my-service
            port:
              number: 80
```

To create the Ingress, run the following command:

kubectl apply -f my-ingress.yaml

Wait a few minutes for the Ingress controller to configure an external Application Load Balancer and an associated backend service. Once this is complete, you have configured your Ingress to use a timeout of 40 seconds and a connection draining timeout of 60 seconds.
Validating backend service properties
You can validate that the correct load balancer settings have been applied through your BackendConfig. To do this, identify the backend service that Ingress has deployed and inspect its settings to validate that they match the Deployment manifests.

First, describe the my-ingress resource and filter for the annotation that lists the backend services associated with the Ingress. For example:

kubectl describe ingress my-ingress | grep ingress.kubernetes.io/backends

You should see output similar to the following:

```
ingress.kubernetes.io/backends: '{"k8s1-27fde173-default-my-service-80-8d4ca500":"HEALTHY","k8s1-27fde173-kube-system-default-http-backend-80-18dfe76c":"HEALTHY"}
```

The output provides information about your backend services. For example, this annotation contains two backend services:
"k8s1-27fde173-default-my-service-80-8d4ca500":"HEALTHY"provides informationabout the backend service associated with themy-serviceKubernetes Service.k8s1-27fde173is a hash used to describe the cluster.defaultis the Kubernetes namespace.HEALTHYindicates that the backend is healthy.
"k8s1-27fde173-kube-system-default-http-backend-80-18dfe76c":"HEALTHY"provides information about the backend service associated with thedefault backend (404-server).k8s1-27fde173is a hash used to describe the cluster.kube-systemis the namespace.default-http-backendis the Kubernetes Service name.80is the host port.HEALTHYindicates that the backend is healthy.
Next, inspect the backend service associated with my-service using gcloud. Filter for "drainingTimeoutSec" and "timeoutSec" to confirm that they've been set in the Google Cloud Load Balancer control plane. For example:

```
# Optionally, set a variable
export BES=k8s1-27fde173-default-my-service-80-8d4ca500

# Filter for drainingTimeoutSec and timeoutSec
gcloud compute backend-services describe ${BES} --global | grep -e "drainingTimeoutSec" -e "timeoutSec"
```

Output:

```
drainingTimeoutSec: 60
timeoutSec: 40
```

Seeing drainingTimeoutSec and timeoutSec in the output confirms that their values were correctly set through the BackendConfig.
Cleaning up
To avoid incurring unwanted charges to your account, delete the Kubernetes objects that you created for this exercise:

```
kubectl delete ingress my-ingress
kubectl delete service my-service
kubectl delete backendconfig my-backendconfig
kubectl delete deployment my-deployment
```

BackendConfig limitations
BackendConfigs have the following limitations:
A (Service, port) pair can consume only one BackendConfig, even if multiple Ingress objects reference the (Service, port). This means all Ingress objects that reference the same (Service, port) must use the same configuration for Cloud Armor, IAP, and Cloud CDN.

IAP and Cloud CDN cannot be enabled for the same HTTP(S) Load Balancing backend service. This means that you cannot configure both IAP and Cloud CDN in the same BackendConfig.

You must use kubectl 1.7 or later to interact with BackendConfig.
Removing the configuration specified in a FrontendConfig or BackendConfig
To revoke an Ingress feature, you must explicitly disable the feature configuration in the FrontendConfig or BackendConfig CRD. The Ingress controller only reconciles configurations specified in these CRDs.

To clear or disable a previously enabled configuration, set the field's value to an empty string ("") or to a Boolean value of false, depending on the field type.

The following BackendConfig manifest disables a Google Cloud Armor security policy and Cloud CDN:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  cdn:
    enabled: false
  securityPolicy:
    name: ""
```

Deleting a FrontendConfig or BackendConfig
FrontendConfig
To delete a FrontendConfig, follow these steps:
- Remove the FrontendConfig's name from the networking.gke.io/v1beta1.FrontendConfig annotation in the Ingress manifest.
- Apply the changed Ingress manifest to your cluster. For example, use kubectl apply.
- Delete the FrontendConfig. For example, use kubectl delete frontendconfig my-frontendconfig.
Warning: Do not delete a FrontendConfig before removing its name from the networking.gke.io/v1beta1.FrontendConfig annotation and applying the updated Ingress manifest. If you do, the configuration does not get removed from the HTTP(S) Load Balancing backend service. Also, you will see errors as Kubernetes events on the Ingress object.

BackendConfig
To delete a BackendConfig, follow these steps:

- Remove the BackendConfig's name from the cloud.google.com/backend-config annotation in the Service manifest.
- Apply the changed Service manifest to your cluster. For example, use kubectl apply.
- Delete the BackendConfig. For example, use kubectl delete backendconfig my-backendconfig.
Warning: Do not delete a BackendConfig before removing its name from the cloud.google.com/backend-config annotation and applying the updated Service manifest. If you do, the configuration does not get removed from the HTTP(S) Load Balancing backend service. Also, you will see errors as Kubernetes events on the Ingress object.

Troubleshooting
You can detect common misconfigurations using the Ingress diagnostic tool. You should also ensure that any health checks are configured correctly.
BackendConfig not found
This error occurs when a BackendConfig for a Service port is specified in the Service annotation, but the actual BackendConfig resource couldn't be found.
To evaluate a Kubernetes event, run the following command:
kubectl get event

The following example output indicates your BackendConfig was not found:

```
KIND     ... SOURCE
Ingress  ... loadbalancer-controller

MESSAGE
Error during sync: error getting BackendConfig for port 80 on service "default/my-service": no BackendConfig for service port exists
```

To resolve this issue, ensure you have not created the BackendConfig resource in the wrong namespace or misspelled its reference in the Service annotation.
Ingress security policy not found
After the Ingress object is created, if the security policy isn't properly associated with the LoadBalancer Service, evaluate the Kubernetes event to see if there is a configuration mistake. If your BackendConfig specifies a security policy that does not exist, a warning event is periodically emitted.
To evaluate a Kubernetes event, run the following command:
kubectl get event

The following example output indicates your security policy was not found:

```
KIND     ... SOURCE
Ingress  ... loadbalancer-controller

MESSAGE
Error during sync: The given security policy "my-policy" does not exist.
```

To resolve this issue, specify the correct security policy name in your BackendConfig.
Addressing 500 series errors with NEGs during workload scaling in GKE
Symptom:
When you use GKE provisioned NEGs for load balancing, you might experience 502 or 503 errors for the services during workload scale down. 502 errors occur when Pods are terminated before existing connections close, while 503 errors occur when traffic is directed to deleted Pods.

This issue can affect clusters if you are using GKE managed load balancing products that use NEGs, including Gateway, Ingress, and standalone NEGs. If you frequently scale your workloads, your cluster is at a higher risk of being affected.
Diagnosis:
Removing a Pod in Kubernetes without draining its endpoint and removing it from the NEG first leads to 500 series errors. To avoid issues during Pod termination, you must consider the order of operations. The following images display scenarios when BackendService Drain Timeout is unset and when BackendService Drain Timeout is set with BackendConfig.
Scenario 1: BackendService Drain Timeout is unset.

The following image displays a scenario where the BackendService Drain Timeout is unset.

Scenario 2: BackendService Drain Timeout is set.

The following image displays a scenario where the BackendService Drain Timeout is set.

The exact time the 500 series errors occur depends on the following factors:
NEG API detach latency: The NEG API detach latency represents the current time taken for the detach operation to finalize in Google Cloud. This is influenced by a variety of factors outside Kubernetes, including the type of load balancer and the specific zone.
Drain latency: Drain latency is the time taken for the load balancer to start directing traffic away from a particular part of your system. After drain is initiated, the load balancer stops sending new requests to the endpoint; however, there is still a latency in triggering drain (drain latency), which can cause temporary 503 errors if the Pod no longer exists.
Health check configuration: More sensitive health check thresholds mitigate the duration of 503 errors because the health check can signal the load balancer to stop sending requests to endpoints even if the detach operation has not finished.
Termination grace period: The termination grace period determines the maximum amount of time a Pod is given to exit. However, a Pod can exit before the termination grace period completes. If a Pod takes longer than this period, the Pod is forced to exit at the end of this period. This is a setting on the Pod and needs to be configured in the workload definition.
Potential resolution:
To prevent those 5XX errors, apply the following settings. The timeout values are suggestions and you might need to adjust them for your specific application. The following section guides you through the customization process.

The following image displays how to keep the Pod alive with a preStop hook:

To avoid 500 series errors, perform the following steps:
- Set the BackendService Drain Timeout for your service to 1 minute.
  Note: If your average request time is more than 30 seconds, see the Customize timeouts section to customize the BackendService Drain Timeout.
  - For Ingress users, see set the timeout on the BackendConfig.
  - For Gateway users, see configure the timeout on the GCPBackendPolicy.
  - For those managing their BackendServices directly when using standalone NEGs, see set the timeout directly on the Backend Service.
- Extend the terminationGracePeriod on the Pod.
  Set the terminationGracePeriodSeconds on the Pod to 3.5 minutes. When combined with the recommended settings, this allows your Pods a 30 to 45 second window for a graceful shutdown after the Pod's endpoint has been removed from the NEG. If you require more time for the graceful shutdown, you can extend the grace period or follow the instructions in the Customize timeouts section.
  The following Pod manifest specifies a termination grace period of 210 seconds (3.5 minutes):
```yaml
spec:
  terminationGracePeriodSeconds: 210
  containers:
  - name: my-app
    ...
  ...
```

- Apply a preStop hook to all containers.
  Note: Apply the preStop hook that will ensure the Pod is alive for 120 seconds longer while the Pod's endpoint is drained in the load balancer and the endpoint is removed from the NEG.
  Apply a preStop hook to every container in your Pod. Containers without the hook will exit as soon as the Pod is deleted. If you use a tool that injects containers, ensure that the injected containers will have the required preStop hook.

```yaml
spec:
  containers:
  - name: my-app
    ...
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 120s"]
  ...
```
Customize timeouts
To ensure Pod continuity and prevent 500 series errors, the Pod must be alive until the endpoint is removed from the NEG. Specifically, to prevent 502 and 503 errors, consider implementing a combination of timeouts and a preStop hook.

To keep the Pod alive longer during the shutdown process, add a preStop hook to the Pod. The preStop hook runs before a Pod is signaled to exit, so the preStop hook can be used to keep the Pod alive until its corresponding endpoint is removed from the NEG.

To extend the duration that a Pod remains active during the shutdown process, insert a preStop hook into the Pod configuration as follows:

```yaml
spec:
  containers:
  - name: my-app
    ...
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep <latency time>"]
```

Note: Apply the preStop hook to each container to keep all containers active. Only containers with the hook run, and the rest stop. If you're using a tool that adds containers, make sure those added containers also have the necessary preStop hook.

You can configure timeouts and related settings to manage the graceful shutdown of Pods during workload scale downs. You can adjust timeouts based on specific use cases. We recommend that you start with longer timeouts and reduce the duration as necessary. You can customize the timeouts by configuring timeout-related parameters and the preStop hook in the following ways:
Backend Service Drain Timeout
The Backend Service Drain Timeout parameter is unset by default and has no effect. If you set the Backend Service Drain Timeout parameter and activate it, the load balancer stops routing new requests to the endpoint and waits the timeout before terminating existing connections.

You can set the Backend Service Drain Timeout parameter by using the BackendConfig with Ingress, the GCPBackendPolicy with Gateway, or manually on the BackendService with standalone NEGs. The timeout should be 1.5 to 2 times longer than the time it takes to process a request. This ensures that if a request came in right before the drain was initiated, it will complete before the timeout completes. Setting the Backend Service Drain Timeout parameter to a value greater than 0 helps mitigate 503 errors because no new requests are sent to endpoints scheduled for removal. For this timeout to be effective, you must use it with the preStop hook to ensure that the Pod remains active while the drain occurs. Without this combination, existing requests that didn't complete will receive a 502 error.
preStop hook time
The preStop hook must delay Pod shutdown sufficiently for both drain latency and the backend service drain timeout to complete, ensuring proper connection drainage and endpoint removal from the NEG before the Pod is shut down.

For optimal results, ensure your preStop hook execution time is greater than or equal to the sum of the Backend Service Drain Timeout and drain latency.
Calculate your ideal preStop hook execution time with the following formula:

```
preStop hook execution time >= BACKEND_SERVICE_DRAIN_TIMEOUT + DRAIN_LATENCY
```

Replace the following:
- BACKEND_SERVICE_DRAIN_TIMEOUT: the time that you configured for the Backend Service Drain Timeout.
- DRAIN_LATENCY: an estimated time for drain latency. We recommend that you use one minute as your estimate.
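As a worked check against the recommended settings: a Backend Service Drain Timeout of 1 minute (60 seconds) plus the recommended one-minute drain latency estimate gives 60 + 60 = 120 seconds, which matches the sleep 120s preStop hook shown in the earlier steps.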
If 500 errors persist, estimate the total occurrence duration and add double that time to the estimated drain latency. This ensures that your Pod has enough time to drain gracefully before being removed from the service. You can adjust this value if it's too long for your specific use case.

Alternatively, you can estimate the timing by examining the deletion timestamp from the Pod and the timestamp when the endpoint was removed from the NEG in the Cloud Audit Logs.
Termination Grace Period parameter
You must configure the terminationGracePeriod parameter to allow sufficient time for the preStop hook to finish and for the Pod to complete a graceful shutdown.

By default, when not explicitly set, the terminationGracePeriod is 30 seconds. You can calculate the optimal terminationGracePeriod using the formula:
terminationGracePeriod >= preStop hook time + Pod shutdown time
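For example, assuming the 120 second preStop hook from the earlier steps and allowing up to 90 seconds for the Pod's own shutdown after the hook completes, the formula gives 120 + 90 = 210 seconds, which is consistent with the terminationGracePeriodSeconds: 210 value used previously.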
Define terminationGracePeriod within the Pod's configuration as follows:

```yaml
spec:
  terminationGracePeriodSeconds: <terminationGracePeriod>
  containers:
  - name: my-app
    ...
  ...
```

NEG not found when creating an Internal Ingress resource
The following error might occur when you create an internal Ingress in GKE:

```
Error syncing: error running backend syncing routine: googleapi: Error 404: The resource 'projects/PROJECT_ID/zones/ZONE/networkEndpointGroups/NEG' was not found, notFound
```

This error occurs because Ingress for internal Application Load Balancers requires Network Endpoint Groups (NEGs) as backends.

In Shared VPC environments or clusters with Network Policies enabled, add the annotation cloud.google.com/neg: '{"ingress": true}' to the Service manifest.
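A minimal sketch of where that annotation goes on the Service (the Service name is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service  # placeholder name
  annotations:
    # Request NEG creation so the internal Ingress can use NEG backends.
    cloud.google.com/neg: '{"ingress": true}'
```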
504 Gateway Timeout: upstream request timeout
The following error might occur when you access a Service from an internal Ingress in GKE:

```
HTTP/1.1 504 Gateway Timeout
content-length: 24
content-type: text/plain

upstream request timeout
```

This error occurs because traffic sent to internal Application Load Balancers is proxied by Envoy proxies in the proxy-only subnet range.

To allow traffic from the proxy-only subnet range, create a firewall rule on the targetPort of the Service.
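A hedged sketch of such a rule with the gcloud CLI; the rule name is illustrative, and the network, port, and proxy-only subnet range are placeholders you must replace with your own values:

```
gcloud compute firewall-rules create allow-proxy-only-subnet \
    --network=NETWORK \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:TARGET_PORT \
    --source-ranges=PROXY_ONLY_SUBNET_RANGE
```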
Error 400: Invalid value for field 'resource.target'
The following error might occur when you access a Service from an internal Ingress in GKE:

```
Error syncing: LB_NAME does not exist: googleapi: Error 400: Invalid value for field 'resource.target': 'https://www.googleapis.com/compute/v1/projects/PROJECT_NAME/regions/REGION_NAME/targetHttpProxies/LB_NAME'. A reserved and active subnetwork is required in the same region and VPC as the forwarding rule.
```

To resolve this issue, create a proxy-only subnet.
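As a sketch, a proxy-only subnet can be created with the gcloud CLI; the subnet name and IP range are illustrative, and the region and network must match your load balancer:

```
gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION \
    --network=NETWORK \
    --range=10.129.0.0/23
```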
Error during sync: error running load balancer syncing routine: loadbalancer does not exist
One of the following errors might occur when the GKE control plane upgrades or when you modify an Ingress object:

```
Error during sync: error running load balancer syncing routine: loadbalancer INGRESS_NAME does not exist: invalid ingress frontend configuration, please check your usage of the 'kubernetes.io/ingress.allow-http' annotation.
```

Or:

```
Error during sync: error running load balancer syncing routine: loadbalancer LOAD_BALANCER_NAME does not exist: googleapi: Error 400: Invalid value for field 'resource.IPAddress': 'INGRESS_VIP'. Specified IP address is in-use and would result in a conflict., invalid
```

To resolve these issues, try the following steps:
- Add the hosts field in the tls section of the Ingress manifest, then delete the Ingress. Wait five minutes for GKE to delete the unused Ingress resources. Then, recreate the Ingress. For more information, see The hosts field of an Ingress object.
- Revert the changes you made to the Ingress. Then, add a certificate using an annotation or Kubernetes Secret.
Known issues
Cannot enable HTTPS Redirects with the V1 Ingress naming scheme
You cannot enable HTTPS redirects on GKE Ingress resources created on GKE versions 1.16.8-gke.12 and earlier. You must recreate the Ingress before you can enable HTTPS redirects, or an error event is created and the Ingress does not sync.
The error event message is similar to the following:
```
Error syncing: error running load balancer syncing routine: loadbalancer lb-name does not exist: ensureRedirectUrlMap() = error: cannot enable HTTPS Redirects with the V1 Ingress naming scheme. Please recreate your ingress to use the newest naming scheme.
```

Google Cloud Armor security policy fields removed from BackendConfig

There is a known issue where updating a BackendConfig resource using the v1beta1 API removes an active Cloud Armor security policy from its Service. This issue affects the following GKE versions:
- 1.18.19-gke.1400 to 1.18.20-gke.5099
- 1.19.10-gke.700 to 1.19.14-gke.299
- 1.20.6-gke.700 to 1.20.9-gke.899
If you do not configure Cloud Armor on your Ingress resources via the BackendConfig, then this issue does not affect your clusters.

For GKE clusters which do configure Cloud Armor through the BackendConfig, it is strongly recommended to only update BackendConfig resources using the v1 API. Applying a BackendConfig to your cluster using v1beta1 BackendConfig resources will remove your Cloud Armor security policy from the Service it is referencing.

To mitigate this issue, only make updates to your BackendConfig using the v1 BackendConfig API. The v1 BackendConfig supports all the same fields as v1beta1 and makes no breaking changes, so the API field can be updated transparently. Replace the apiVersion field of any active BackendConfig manifests with cloud.google.com/v1 and do not use cloud.google.com/v1beta1. The following sample manifest describes a BackendConfig resource that uses the v1 API:
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backend-config
spec:
  securityPolicy:
    name: "ca-how-to-security-policy"
```

If you have CI/CD systems or tools which regularly update BackendConfig resources, ensure that you are using the cloud.google.com/v1 API group in those systems.

If your BackendConfig has already been updated with the v1beta1 API, your Cloud Armor security policy might have been removed. You can determine if this has happened by running the following command:

```
kubectl get backendconfigs -A -o json | jq -r '.items[] | select(.spec.securityPolicy == {}) | .metadata | "\(.namespace)/\(.name)"'
```

If the response returns output, then your cluster is impacted by the issue. The output of this command returns a list of BackendConfig resources (<namespace>/<name>) that are affected by the issue. If the output is empty, then your BackendConfig has not been updated using the v1beta1 API since the issue was introduced. Any future updates to your BackendConfig should only use v1.
If your Cloud Armor security policy was removed, you can determine when it was removed using the following Logging query:

```
resource.type="gce_backend_service"
protoPayload.methodName="v1.compute.backendServices.setSecurityPolicy"
protoPayload.authenticationInfo.principalEmail:"container-engine-robot.iam.gserviceaccount.com"
protoPayload.response.status = "RUNNING"
NOT protoPayload.authorizationInfo.permission:"compute.securityPolicies.use"
```

If any of your clusters have been impacted, then this can be corrected by pushing an update to your BackendConfig resource that uses the v1 API.

Upgrade your GKE control plane to one of the following updated versions that patch this issue and allow v1beta1 BackendConfig resources to be used safely:
- 1.18.20-gke.5100 and later
- 1.19.14-gke.300 and later
- 1.20.9-gke.900 and later