Cloud Load Balancing overview
A load balancer distributes user traffic across multiple instances of your applications. By spreading the load, load balancing reduces the risk that your applications experience performance issues. Google's Cloud Load Balancing is built on reliable, high-performing technologies such as Maglev, Andromeda, Google Front Ends, and Envoy—the same technologies that power Google's own products.
Cloud Load Balancing offers a comprehensive portfolio of application and network load balancers. Use our global proxy load balancers to distribute millions of requests per second among backends in multiple regions with our Google Front End fleet in over 80 distinct locations worldwide—all with a single, anycast IP address. Implement strong jurisdictional control with our regional proxy load balancers, keeping your backends and proxies in a region of your choice without worrying about TLS/SSL offload. Use our passthrough load balancers to quickly route multiple protocols to backends with the high performance of direct server return (DSR).
Key features of Cloud Load Balancing
Cloud Load Balancing offers the following load balancer features:
- Single anycast IP address. With Cloud Load Balancing, a single anycast IP address is the frontend for all of your backend instances in regions around the world. It provides cross-region load balancing, including automatic multi-region failover, which moves traffic to failover backends if your primary backends become unhealthy. Cloud Load Balancing reacts instantaneously to changes in users, traffic, network, backend health, and other related conditions.
- Seamless autoscaling. Cloud Load Balancing can scale as your users and traffic grow, including easily handling huge, unexpected, and instantaneous spikes by diverting traffic to other regions in the world that can take traffic. Autoscaling does not require pre-warming: you can scale from zero to full traffic in a matter of seconds.
- Software-defined load balancing. Cloud Load Balancing is a fully distributed, software-defined, managed service for all your traffic. It is not an instance-based or device-based solution, so you won't be locked into a physical load-balancing infrastructure or face the high availability, scale, and management challenges inherent in instance-based load balancers.
- Layer 4 and Layer 7 load balancing. Use Layer 4-based load balancing to direct traffic based on data from network and transport layer protocols such as TCP, UDP, ESP, GRE, ICMP, and ICMPv6. Use Layer 7-based load balancing to add request routing decisions based on attributes, such as the HTTP header and the uniform resource identifier.
- External and internal load balancing. Defines whether the load balancer can be used for external or internal access. You can use an external load balancer when your clients need to reach your application from the internet. You can use an internal load balancer when your clients are inside of Google Cloud. To learn more, see external versus internal load balancing.
- Global and regional load balancing. Defines the scope of the load balancer. A global load balancer supports backends in multiple regions, whereas a regional load balancer supports backends in a single region. Even though the IP address of a regional load balancer is located in one region, a regional load balancer is globally accessible. You can distribute your backends in a single region or in multiple regions to terminate connections close to your users and to meet your high availability requirements. To learn more, see global versus regional load balancing.
- Routing of traffic in Premium Tier and Standard Tier. The load balancing services in Google Cloud come in different flavors depending on the network tier you choose, that is, Premium Tier or Standard Tier, with the former being more expensive than the latter. The Premium Tier leverages Google's high-quality global backbone, whereas the Standard Tier uses the public internet to route traffic across the network. The network tier you choose depends on whether you prioritize cost or performance for your enterprise workload. Some load balancing services are only available in Premium Tier and not the Standard Tier. To learn more, see Premium versus Standard Network Service Tiers.
- Advanced feature support. Cloud Load Balancing supports features such as IPv6 load balancing, source IP-based traffic steering, weighted load balancing, WebSockets, user-defined request headers, and protocol forwarding for private virtual IP addresses (VIPs).
It also includes the following integrations:
- Integration with Cloud CDN for cached content delivery. Cloud CDN is supported with the global external Application Load Balancer and the classic Application Load Balancer.
- Integration with Google Cloud Armor to protect your infrastructure from distributed denial-of-service (DDoS) attacks and other targeted application attacks. Always-on DDoS protection is available for the global external Application Load Balancer, the classic Application Load Balancer, the external proxy Network Load Balancer, and the external passthrough Network Load Balancer. Additionally, Google Cloud Armor supports advanced network DDoS protection only for external passthrough Network Load Balancers. For more information, see Configure advanced network DDoS protection.
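As a concrete illustration of these integrations, the following gcloud sketch shows one way Cloud CDN and a Google Cloud Armor security policy might be enabled on an existing global backend service. It is a minimal, hedged example rather than a full setup; the backend service name `web-backend-service` and the policy name `my-security-policy` are hypothetical placeholders, not resources defined on this page.

```
# Assumption: a global backend service named "web-backend-service" already
# exists behind a global external Application Load Balancer, and a
# Google Cloud Armor policy named "my-security-policy" has been created.

# Enable Cloud CDN for cached content delivery on the backend service.
gcloud compute backend-services update web-backend-service \
    --global \
    --enable-cdn

# Attach the Google Cloud Armor security policy for DDoS and
# application-layer protection.
gcloud compute backend-services update web-backend-service \
    --global \
    --security-policy=my-security-policy
```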
Types of Google Cloud load balancers
Cloud Load Balancing offers two types of load balancers: Application Load Balancers and Network Load Balancers. You'd choose an Application Load Balancer when you need a Layer 7 load balancer for your applications with HTTP(S) traffic. You'd choose a Network Load Balancer when you need a Layer 4 load balancer that supports TLS offloading (with a proxy load balancer) or you need support for IP protocols such as UDP, ESP, and ICMP (with a passthrough load balancer).
The following table provides a high-level overview of the different types of Google Cloud load balancers categorized by the OSI layer on which they operate and whether they are used for external or internal access.
| Cloud Load Balancing | External (Accepts internet traffic) | Internal (Accepts internal Google Cloud traffic) |
|---|---|---|
| Application Load Balancers (HTTP(S), Layer 7 load balancing) | Global external Application Load Balancer, Regional external Application Load Balancer, Classic Application Load Balancer | Regional internal Application Load Balancer, Cross-region internal Application Load Balancer |
| Network Load Balancers (TCP/SSL/Other, Layer 4 load balancing): Proxy Network Load Balancers | Global external proxy Network Load Balancer, Regional external proxy Network Load Balancer, Classic proxy Network Load Balancer | Regional internal proxy Network Load Balancer, Cross-region internal proxy Network Load Balancer |
| Network Load Balancers (TCP/SSL/Other, Layer 4 load balancing): Passthrough Network Load Balancers | External passthrough Network Load Balancer | Internal passthrough Network Load Balancer |
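To see which of these load balancer types are deployed in a project, one option is to list the forwarding rules (the load balancer frontends) together with their load-balancing scheme, which distinguishes external from internal deployments. This is a minimal, hedged example; the columns shown are standard forwarding-rule fields.

```
# List load balancer frontends (forwarding rules) in the current project.
# EXTERNAL and EXTERNAL_MANAGED schemes accept internet traffic;
# INTERNAL and INTERNAL_MANAGED schemes accept internal VPC traffic.
gcloud compute forwarding-rules list \
    --format="table(name, region, IPAddress, loadBalancingScheme)"
```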
Application Load Balancers
Application Load Balancers are proxy-based Layer 7 load balancers that enable you to run and scale your services behind an anycast IP address. The Application Load Balancer distributes HTTP and HTTPS traffic to backends hosted on a variety of Google Cloud platforms—such as Compute Engine and Google Kubernetes Engine (GKE)—as well as external backends outside Google Cloud.
The following diagram provides a high-level overview of the different types of Application Load Balancers that can be deployed externally or internally, depending on whether your application is internet-facing or internal.
External Application Load Balancers are implemented as managed services either on Google Front Ends (GFEs) or Envoy proxies. Clients can connect to these load balancers from anywhere on the internet. Note the following:
- These load balancers can be deployed in the following modes: global, regional, or classic.
- Global external Application Load Balancers support backends in multiple regions.
- Regional external Application Load Balancers support backends in a single region only.
- Classic Application Load Balancers are global in Premium Tier. In Standard Tier, they can distribute traffic to backends in a single region only.
- Application Load Balancers use the open source Envoy proxy to enable advanced traffic management capabilities.
Internal Application Load Balancers are built on the Andromeda network virtualization stack and the open source Envoy proxy. This load balancer provides internal proxy-based load balancing of Layer 7 application data. The load balancer uses an internal IP address that is accessible only to clients in the same VPC network or clients connected to your VPC network. Note the following:
- These load balancers can be deployed in the following modes: regional or cross-region.
- Regional internal Application Load Balancers support backends only in a single region.
- Cross-region internal Application Load Balancers support backends in multiple regions and are always globally accessible. Clients from any Google Cloud region can send traffic to the load balancer.
To learn more about Application Load Balancers, see Application Load Balancer overview.
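To make the pieces of a global external Application Load Balancer concrete, the following is a minimal, hedged gcloud sketch. It assumes a zonal managed instance group named `web-mig` in `us-central1-a` already serves HTTP on port 80; all resource names are hypothetical placeholders, and a production deployment would typically add HTTPS, certificates, and firewall rules.

```
# Health check used by the backend service.
gcloud compute health-checks create http web-hc --port=80

# Global backend service using the EXTERNAL_MANAGED scheme
# (global external Application Load Balancer).
gcloud compute backend-services create web-backend-service \
    --global \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=HTTP \
    --health-checks=web-hc

# Attach the existing instance group as a backend.
gcloud compute backend-services add-backend web-backend-service \
    --global \
    --instance-group=web-mig \
    --instance-group-zone=us-central1-a

# URL map and target proxy implement the Layer 7 routing.
gcloud compute url-maps create web-map \
    --default-service=web-backend-service
gcloud compute target-http-proxies create web-proxy \
    --url-map=web-map

# Single anycast frontend: a global forwarding rule on port 80.
gcloud compute forwarding-rules create web-fr \
    --global \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --target-http-proxy=web-proxy \
    --ports=80
```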
Network Load Balancers
Network Load Balancers are Layer 4 load balancers that can handle TCP, UDP, or other IP protocol traffic. These load balancers are available as either proxy load balancers or passthrough load balancers. You can pick a load balancer depending on the needs of your application and the type of traffic that it needs to handle. Choose a proxy Network Load Balancer if you want to configure a reverse proxy load balancer with support for advanced traffic controls and backends on-premises and in other cloud environments. Choose a passthrough Network Load Balancer if you want to preserve the source IP address of the client packets, you prefer direct server return for responses, or you want to handle a variety of IP protocols such as TCP, UDP, ESP, GRE, ICMP, and ICMPv6.
Proxy Network Load Balancers
Proxy Network Load Balancers are Layer 4 reverse proxy load balancers that distribute TCP traffic to virtual machine (VM) instances in your Google Cloud VPC network. Traffic is terminated at the load balancing layer and then forwarded to the closest available backend by using TCP.
The following diagram provides a high-level overview of the different types of proxy Network Load Balancers that can be deployed externally or internally, depending on whether your application is internet-facing or internal.
External proxy Network Load Balancers are Layer 4 load balancers that distribute traffic that comes from the internet to backends in your Google Cloud VPC network, on-premises, or in other cloud environments. These load balancers are built on either Google Front Ends (GFEs) or Envoy proxies.
These load balancers can be deployed in the following modes: global, regional, or classic.
- Global external proxy Network Load Balancers support backends in multiple regions.
- Regional external proxy Network Load Balancers support backends in a single region.
- Classic proxy Network Load Balancers are global in Premium Tier. In Standard Tier, they can distribute traffic to backends in a single region only.
Internal proxy Network Load Balancers are Envoy proxy-based regional Layer 4 load balancers that enable you to run and scale your TCP service traffic behind an internal IP address that is accessible only to clients in the same VPC network or clients connected to your VPC network.
These load balancers can be deployed in one of the following modes: regional or cross-region.
- Regional internal proxy Network Load Balancers support backends in a single region only.
- Cross-region internal proxy Network Load Balancers support backends in multiple regions and are always globally accessible. Clients from any Google Cloud region can send traffic to the load balancer.
To learn more about proxy Network Load Balancers, see proxy Network Load Balancer overview.
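The following hedged gcloud sketch outlines the frontend of a global external proxy Network Load Balancer as described in this section. It assumes a TCP health check named `tcp-hc` already exists and that backends are attached separately; all names are hypothetical placeholders.

```
# Global backend service for TCP traffic, terminated at the proxy layer.
gcloud compute backend-services create tcp-backend-service \
    --global \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=TCP \
    --health-checks=tcp-hc

# Target TCP proxy that fronts the backend service.
gcloud compute target-tcp-proxies create tcp-lb-proxy \
    --backend-service=tcp-backend-service

# Internet-facing frontend: a global forwarding rule on port 443.
gcloud compute forwarding-rules create tcp-lb-fr \
    --global \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --target-tcp-proxy=tcp-lb-proxy \
    --ports=443
```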
Passthrough Network Load Balancers
Passthrough Network Load Balancers are Layer 4 regional, passthrough load balancers. These load balancers distribute traffic among backends in the same region as the load balancer. They are implemented by using Andromeda virtual networking and Google Maglev.
As the name suggests, these load balancers are not proxies. Load-balanced packets are received by backend VMs with the packet's source and destination IP addresses, protocol, and, if the protocol is port-based, the source and destination ports unchanged. Load-balanced connections are terminated at the backends. Responses from the backend VMs go directly to the clients, not back through the load balancer. The industry term for this is direct server return (DSR).
These load balancers, as depicted in the following image, are deployed in two modes, depending on whether the load balancer is internet-facing or internal.
External passthrough Network Load Balancers are built on Maglev. Clients can connect to these load balancers from anywhere on the internet, regardless of their Network Service Tiers. The load balancer can also receive traffic from Google Cloud VMs with external IP addresses or from Google Cloud VMs that have internet access through Cloud NAT or instance-based NAT.
Backends for external passthrough Network Load Balancers can be deployed using either a backend service or a target pool. For new deployments, we recommend using backend services.
Internal passthrough Network Load Balancers are built on the Andromeda network virtualization stack. An internal passthrough Network Load Balancer lets you load balance TCP/UDP traffic behind an internal load-balancing IP address that is accessible only to systems in the same VPC network or systems connected to your VPC network. This load balancer can only be configured in Premium Tier.
To learn more about passthrough Network Load Balancers, see the passthrough Network Load Balancer overview.
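For comparison with the proxy-based examples, here is a minimal, hedged gcloud sketch of an internal passthrough Network Load Balancer. It assumes a zonal instance group named `internal-mig` in `us-central1-a` and the `default` VPC network and subnet; all names are hypothetical placeholders.

```
# TCP health check for the backends.
gcloud compute health-checks create tcp internal-hc --port=80

# Regional backend service with the INTERNAL (passthrough) scheme.
gcloud compute backend-services create internal-backend-service \
    --load-balancing-scheme=INTERNAL \
    --protocol=TCP \
    --region=us-central1 \
    --health-checks=internal-hc

gcloud compute backend-services add-backend internal-backend-service \
    --region=us-central1 \
    --instance-group=internal-mig \
    --instance-group-zone=us-central1-a

# Internal frontend: packets reach the backends with source and destination
# unchanged, and responses return directly to clients (direct server return).
gcloud compute forwarding-rules create internal-fr \
    --load-balancing-scheme=INTERNAL \
    --region=us-central1 \
    --network=default \
    --subnet=default \
    --ports=80 \
    --backend-service=internal-backend-service \
    --backend-service-region=us-central1
```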
Underlying technologies of Google Cloud load balancers
The following table lists the underlying technology upon which each Google Cloud load balancer is built.
- Google Front Ends (GFEs) are software-defined, distributed systems that are located in Google points of presence (PoPs) and perform global load balancing in conjunction with other systems and control planes.
- Andromeda is Google Cloud's software-defined network virtualization stack.
- Maglev is a distributed system for Network Load Balancing.
- Envoy is an open source edge and service proxy, designed for cloud-native applications.
| Load balancer | Technology |
|---|---|
| Global external Application Load Balancer | Envoy-based Google Front-End (GFE) |
| Classic Application Load Balancer | GFE |
| Regional external Application Load Balancer | Envoy |
| Cross-region internal Application Load Balancer | Envoy |
| Regional internal Application Load Balancer | Envoy |
| Global external proxy Network Load Balancer | Envoy-based GFE |
| Classic proxy Network Load Balancer | GFE |
| Regional external proxy Network Load Balancer | Envoy |
| Regional internal proxy Network Load Balancer | Envoy |
| Cross-region internal proxy Network Load Balancer | Envoy |
| External passthrough Network Load Balancer | Maglev |
| Internal passthrough Network Load Balancer | Andromeda |
Choose a load balancer
To determine which Cloud Load Balancing product to use, you must first determine what traffic type your load balancers must handle. As a general rule, you'd choose an Application Load Balancer when you need a flexible feature set for your applications with HTTP(S) traffic. You'd choose a Network Load Balancer when you need TLS offloading at scale or support for UDP, or if you need to expose client IP addresses to your applications.
You can further narrow down your choices depending on your application's requirements: whether your application is external (internet-facing) or internal, whether you need backends deployed globally or regionally, and whether you need Premium or Standard Network Service Tier.
The following diagram shows all of the available deployment modes for Cloud Load Balancing. For more details, see the Choose a load balancer guide.
1. Global external Application Load Balancers support two modes of operation: global and classic.
2. Global external proxy Network Load Balancers support two modes of operation: global and classic.
3. Passthrough Network Load Balancers preserve client source IP addresses. Passthrough Network Load Balancers also support additional protocols like UDP, ESP, and ICMP.
Summary of types of Google Cloud load balancers
The following table provides details, such as the network service tier on which each load balancer operates, along with its load-balancing scheme.
| Load balancer | Deployment mode | Traffic type | Network service tier | Load-balancing scheme¹ |
|---|---|---|---|---|
| Application Load Balancers | Global external | HTTP or HTTPS | Premium Tier | EXTERNAL_MANAGED |
| | Regional external | HTTP or HTTPS | Premium or Standard Tier | EXTERNAL_MANAGED |
| | Classic | HTTP or HTTPS | Global in Premium Tier; regional in Standard Tier | EXTERNAL² |
| | Regional internal³ | HTTP or HTTPS | Premium Tier | INTERNAL_MANAGED |
| | Cross-region internal | HTTP or HTTPS | Premium Tier | INTERNAL_MANAGED |
| Proxy Network Load Balancers | Global external | TCP with optional SSL offload | Premium Tier | EXTERNAL_MANAGED |
| | Regional external | TCP | Premium or Standard Tier | EXTERNAL_MANAGED |
| | Classic | TCP with optional SSL offload | Global in Premium Tier; regional in Standard Tier | EXTERNAL |
| | Regional internal³ | TCP without SSL offload | Premium Tier | INTERNAL_MANAGED |
| | Cross-region internal | TCP without SSL offload | Premium Tier | INTERNAL_MANAGED |
| Passthrough Network Load Balancers | External (always regional) | TCP, UDP, ESP, GRE, ICMP, and ICMPv6 | Premium or Standard Tier | EXTERNAL |
| | Internal³ (always regional) | TCP, UDP, ICMP, ICMPv6, SCTP, ESP, AH, and GRE | Premium Tier | INTERNAL |
¹ The load-balancing scheme is an attribute on the forwarding rule and the backend service of a load balancer and indicates whether the load balancer can be used for internal or external traffic.
The term managed in EXTERNAL_MANAGED or INTERNAL_MANAGED indicates that the load balancer is implemented as a managed service either on a Google Front End (GFE) or on the open source Envoy proxy. In a load-balancing scheme that is managed, requests are routed either to the GFE or to the Envoy proxy.
² You can attach EXTERNAL_MANAGED backend services to EXTERNAL forwarding rules. However, EXTERNAL backend services cannot be attached to EXTERNAL_MANAGED forwarding rules. To take advantage of new features available only with the global external Application Load Balancer, we recommend that you migrate your existing EXTERNAL resources to EXTERNAL_MANAGED by using the migration process described at Migrate resources from classic to global external Application Load Balancer.
³ By default, regional internal load balancers only allow traffic from clients in the same region as the load balancer. However, you can allow traffic from clients in other regions by enabling global access on the forwarding rule.
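As a hedged illustration of footnotes 1 and 3, the following gcloud commands show how the load-balancing scheme of a forwarding rule can be inspected and how global access can be enabled on a regional internal load balancer's forwarding rule. The rule name `internal-fr` and the region are hypothetical placeholders (matching the earlier internal passthrough sketch).

```
# Inspect the load-balancing scheme of a forwarding rule (footnote 1).
gcloud compute forwarding-rules describe internal-fr \
    --region=us-central1 \
    --format="get(loadBalancingScheme)"

# Allow clients in any region to reach a regional internal load balancer
# by enabling global access on its forwarding rule (footnote 3).
gcloud compute forwarding-rules update internal-fr \
    --region=us-central1 \
    --allow-global-access
```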
Interfaces
You can configure and update your load balancers by using the following interfaces:
- The Google Cloud CLI: A command-line tool included in the Google Cloud CLI; the documentation calls on this tool frequently to accomplish tasks. For a complete overview of the tool, see the gcloud CLI guide. You can find commands related to load balancing in the `gcloud compute` command group. You can also get detailed help for any gcloud command by using the `--help` flag:

  ```
  gcloud compute http-health-checks create --help
  ```

- The Google Cloud console: Load-balancing tasks can be accomplished by using the Google Cloud console.
- The REST API: All load-balancing tasks can be accomplished by using the Cloud Load Balancing API. The API reference docs describe the resources and methods available to you.
- Terraform: You can provision, update, and delete the Google Cloud load-balancing infrastructure by using an open source infrastructure-as-code tool such as Terraform.
What's next
- To help you determine which Google Cloud load balancer best meets your needs, see Choose a load balancer.
- To understand the components of different types of Google Cloud load balancers, see Cloud Load Balancing resource model.
- To see a comparative overview of the load-balancing features offered by Cloud Load Balancing, see Load balancer feature comparison.
- To use prebuilt Terraform templates to streamline the setup and management of Google Cloud's networking infrastructure, explore the Simplified Cloud Networking Configuration Solutions GitHub repository.