Ingress for internal Application Load Balancers
This page explains how Ingress for internal Application Load Balancers works in Google Kubernetes Engine (GKE). You can also learn how to set up and use Ingress for internal Application Load Balancers.
In GKE, the internal Application Load Balancer is a proxy-based, regional, Layer 7 load balancer that enables you to run and scale your services behind an internal load balancing IP address. GKE natively supports the internal Application Load Balancer through the creation of Ingress objects on GKE clusters.
For general information about using Ingress for load balancing in GKE, see HTTP(S) load balancing with Ingress.
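For example, an Ingress is provisioned as an internal Application Load Balancer when it carries the gce-internal Ingress class. The following manifest is a minimal sketch: the Ingress name and the backing Service my-internal-service are illustrative placeholders, and the Service must already exist in the cluster.

```yaml
# Minimal sketch of an Ingress that provisions an internal Application Load Balancer.
# The resource name and the Service "my-internal-service" are illustrative placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ilb-demo-ingress
  annotations:
    # Selects the internal (regional) Application Load Balancer instead of the external one.
    kubernetes.io/ingress.class: "gce-internal"
spec:
  defaultBackend:
    service:
      name: my-internal-service
      port:
        number: 80
```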
Benefits of using Ingress for internal Application Load Balancers
Using GKE Ingress for internal Application Load Balancers provides the following benefits:
- A highly available, GKE-managed Ingress controller.
- Load balancing for internal, service-to-service communication.
- Container-native load balancing with Network Endpoint Groups (NEGs), as shown in the sketch after this list.
- Application routing with HTTP and HTTPS support.
- High-fidelity Compute Engine health checks for resilient services.
- Envoy-based proxies that are deployed on-demand to meet traffic capacity needs.
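As a sketch of the container-native load balancing point above, annotating a Service with cloud.google.com/neg tells GKE to create NEGs whose endpoints are the Pods themselves, so the load balancer sends traffic directly to Pod IPs. The Service name, selector, and ports below are illustrative placeholders; on newer VPC-native clusters GKE can also apply NEGs by default.

```yaml
# Sketch: a Service annotated so GKE creates NEGs for Ingress, enabling
# container-native load balancing. Names, selector, and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: hello-service
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: hello
  ports:
  - name: http
    port: 80
    targetPort: 8080
```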
Support for Google Cloud features
Ingress for internal Application Load Balancers supports a variety of additional features.
- Self-managed SSL Certificates using Google Cloud. Only regional certificates are supported for this feature.
- Self-managed SSL Certificates using Kubernetes Secrets.
- The Session Affinity and Connection Timeout BackendService features. You can configure these features using BackendConfig, as in the sketch after this list.
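For example, Session Affinity and Connection Timeout can be expressed in a BackendConfig resource that a Service references through the cloud.google.com/backend-config annotation. The following is a sketch only; the names, the 60-second timeout, and the CLIENT_IP affinity type are illustrative values.

```yaml
# Sketch: a BackendConfig that sets a backend timeout and client IP session affinity,
# attached to a Service by annotation. Names and values are illustrative.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  timeoutSec: 60
  sessionAffinity:
    affinityType: "CLIENT_IP"
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: my-app
  ports:
  - name: http
    port: 80
    targetPort: 8080
```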
Required networking environment for internal Application Load Balancers
Important: Ingress for internal Application Load Balancers requires you to use NEGs as backends. It does not support instance groups as backends.

The internal Application Load Balancer provides a pool of proxies for your network. The proxies evaluate where each HTTP(S) request should go based on factors such as the URL map, the BackendService's session affinity, and the balancing mode of each backend NEG.
A region's internal Application Load Balancer uses the proxy-only subnet for that region in your VPC network to assign internal IP addresses to each proxy created by Google Cloud.
By default, the IP address assigned to a load balancer's forwarding rule comes from the node's subnet range assigned by GKE instead of from the proxy-only subnet. You can also manually specify an IP address for the forwarding rule from any subnet when you create the rule.
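For example, one way to use a specific address is to reserve a regional internal IP address in the same region and VPC network (for example, with gcloud compute addresses create) and then reference it by name from the Ingress. The following sketch assumes a reserved address named my-internal-ip; the resource names are illustrative placeholders.

```yaml
# Sketch: an internal Ingress whose forwarding rule uses a pre-reserved regional
# internal address named "my-internal-ip" (a placeholder that must already exist).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ilb-static-ip-ingress
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
    kubernetes.io/ingress.regional-static-ip-name: "my-internal-ip"
spec:
  defaultBackend:
    service:
      name: my-internal-service
      port:
        number: 80
```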
The following diagram provides an overview of the traffic flow for an internal Application Load Balancer, as described in the preceding paragraph.
Here's how the internal Application Load Balancer works:
- A client makes a connection to the IP address and port of the load balancer's forwarding rule.
- A proxy receives and terminates the client's network connection.
- The proxy establishes a connection to the appropriate endpoint (Pod) in a NEG, as determined by the load balancer's URL map and backend services.
Each proxy listens on the IP address and port specified by the corresponding load balancer's forwarding rule. The source IP address of each packet sent from a proxy to an endpoint is the internal IP address assigned to that proxy from the proxy-only subnet.
HTTPS (TLS) between load balancer and your application
An internal Application Load Balancer acts as a proxy between your clients and your application. Clients can use HTTP or HTTPS to communicate with the load balancer proxy. The connection from the load balancer proxy to your application uses HTTP by default. However, if your application runs in a GKE Pod and can receive HTTPS requests, you can configure the load balancer to use HTTPS when it forwards requests to your application.
To configure the protocol used between the load balancer and your application, use the cloud.google.com/app-protocols annotation in your Service manifest.
You must use the port's name field in the annotation. Do not use a different field such as targetPort. For a port that doesn't have a name, use an empty string as the key, for example cloud.google.com/app-protocols: '{"": "HTTPS"}'. Editing the port name or annotation after the initial setup might cause downtime for your applications.

The following Service manifest specifies two ports. The annotation specifies that an internal Application Load Balancer should use HTTP when it targets port 80 of the Service, and use HTTPS when it targets port 443 of the Service.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'
spec:
  type: NodePort
  selector:
    app: metrics
    department: sales
  ports:
  - name: my-https-port
    port: 443
    targetPort: 8443
  - name: my-http-port
    port: 80
    targetPort: 50001
```

What's next
- Learn how to deploy a proxy-only subnet.
- Learn about Ingress for external Application Load Balancers.
- Learn how to configure Ingress for internal Application Load Balancers.
- Read an overview of networking in GKE.