Gated egress

Last reviewed 2025-01-23 UTC

The architecture of the gated egress networking pattern is based on exposing select APIs from the on-premises environment or another cloud environment to workloads that are deployed in Google Cloud. It does so without directly exposing them to the public internet from an on-premises environment or from other cloud environments. You can facilitate this limited exposure through an API gateway or proxy, or a load balancer that serves as a facade for existing workloads. You can deploy the API gateway functionality in an isolated network segment, like a perimeter network.

The gated egress networking pattern applies primarily to (but isn't limited to) tiered application architecture patterns and partitioned application architecture patterns. When deploying backend workloads within an internal network, gated egress networking helps to maintain a higher level of security within your on-premises computing environment. The pattern requires that you connect computing environments in a way that meets the following communication requirements:

  • Workloads that you deploy in Google Cloud can communicate with the API gateway or load balancer (or a Private Service Connect endpoint) that exposes the application by using internal IP addresses.
  • Other systems in the private computing environment can't be reached directly from within Google Cloud.
  • Communication from the private computing environment to any workloads deployed in Google Cloud isn't allowed.
  • Traffic to the private APIs in other environments is only initiated from within the Google Cloud environment.
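
As a minimal sketch of how these requirements might be enforced on the Google Cloud side, the following example uses the google-cloud-compute Python client to create VPC firewall rules that deny all egress from the workload VPC except traffic to the remote API gateway range over TCP 443. The project ID, network, and IP range are hypothetical placeholders; adjust them to your own resource hierarchy and addressing plan.

```python
# Sketch only: deny all egress from the workload VPC by default, then allow
# egress to the remote API gateway range on TCP 443. Assumes the
# google-cloud-compute library and hypothetical resource names.
from google.cloud import compute_v1

PROJECT = "my-host-project"            # hypothetical project ID
NETWORK = f"projects/{PROJECT}/global/networks/workload-vpc"
REMOTE_GATEWAY_RANGE = "10.20.0.0/24"  # hypothetical on-premises gateway range


def create_rule(firewall: compute_v1.Firewall) -> None:
    client = compute_v1.FirewallsClient()
    operation = client.insert(project=PROJECT, firewall_resource=firewall)
    operation.result()  # block until the rule is created


# Low-priority rule that blocks all other egress from the workload VPC.
deny_all = compute_v1.Firewall()
deny_all.name = "deny-all-egress"
deny_all.network = NETWORK
deny_all.direction = "EGRESS"
deny_all.priority = 65534
deny_all.destination_ranges = ["0.0.0.0/0"]
denied = compute_v1.Denied()
denied.I_p_protocol = "all"
deny_all.denied = [denied]

# Higher-priority rule that allows egress only to the remote API gateway.
allow_gateway = compute_v1.Firewall()
allow_gateway.name = "allow-egress-to-remote-api-gateway"
allow_gateway.network = NETWORK
allow_gateway.direction = "EGRESS"
allow_gateway.priority = 1000
allow_gateway.destination_ranges = [REMOTE_GATEWAY_RANGE]
allowed = compute_v1.Allowed()
allowed.I_p_protocol = "tcp"
allowed.ports = ["443"]
allow_gateway.allowed = [allowed]

if __name__ == "__main__":
    create_rule(deny_all)
    create_rule(allow_gateway)
```

Because VPC firewall rules are stateful, responses to this allowed egress traffic return automatically, while inbound connections from the private environment remain blocked by the implied deny-ingress rule.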

The focus of this guide is on hybrid and multicloud environments connected over a private hybrid network. If the security requirements of your organization permit it, remote target APIs with public IP addresses can be reached directly over the internet. But you must consider the following security mechanisms:

  • OAuth 2.0 with Transport Layer Security (TLS).
  • Rate limiting.
  • Threat protection policies.
  • Mutual TLS configured to the backend of your API layer.
  • IP address allowlist filtering configured to only allow communication with predefined API sources and destinations from both sides.
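
To make the client side of these controls concrete, the following sketch calls a remote target API over TLS with a client certificate (mutual TLS) and an OAuth 2.0 bearer token, using the Python requests library. The URL, certificate paths, and token value are hypothetical; rate limiting, threat protection, and allowlist filtering would be enforced on the API gateway or proxy rather than in this client code.

```python
# Sketch only: call a remote target API over mutual TLS with an OAuth 2.0
# bearer token. All names, paths, and the token below are hypothetical.
import requests

API_URL = "https://api.partner.example.com/v1/orders"
CLIENT_CERT = ("/secrets/client.crt", "/secrets/client.key")  # mTLS client identity
CA_BUNDLE = "/secrets/partner-ca.pem"  # CA that signed the API's server certificate
ACCESS_TOKEN = "token-from-your-oauth-flow"

response = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    cert=CLIENT_CERT,   # present the client certificate for mutual TLS
    verify=CA_BUNDLE,   # validate the server certificate against a pinned CA
    timeout=10,
)
response.raise_for_status()
print(response.json())
```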

To secure an API proxy, consider these other security aspects. For more information, see Best practices for securing your applications and APIs using Apigee.

Architecture

The following diagram shows a reference architecture that supports the communication requirements listed in the previous section:

Data flows in one direction from a host project in Google Cloud to a workload in an on-premises environment.

Data flows through the preceding diagram as follows:

  • On the Google Cloud side, you can deploy workloads into virtual private clouds (VPCs). The VPCs can be single or multiple (shared or non-shared). The deployment should be in alignment with the projects and resource hierarchy design of your organization.
  • The VPC networks of the Google Cloud environment are extended to the other computing environments. The environments can be on-premises or in another cloud. To facilitate the communication between environments using internal IP addresses, use a suitable hybrid and multicloud networking connectivity.
  • To limit the traffic that originates from specific VPC IP addresses, and is destined for remote gateways or load balancers, use IP address allowlist filtering. Return traffic from these connections is allowed when using stateful firewall rules. To secure and limit communications to only the allowed source and destination IP addresses, you can combine capabilities such as allowlist filtering with firewall rules or policies.

  • All environments share overlap-free RFC 1918 IP address space.
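
Overlap-free addressing is easiest to verify during planning. The following sketch uses Python's standard ipaddress module to check a set of hypothetical per-environment RFC 1918 ranges for overlaps before any hybrid connectivity is provisioned.

```python
# Sketch only: verify that the RFC 1918 ranges planned for each environment
# don't overlap. The ranges below are hypothetical examples.
from ipaddress import ip_network
from itertools import combinations

environment_ranges = {
    "gcp-workload-vpc": "10.10.0.0/16",
    "on-premises": "10.20.0.0/16",
    "other-cloud": "172.16.0.0/20",
}

networks = {name: ip_network(cidr) for name, cidr in environment_ranges.items()}

for (name_a, net_a), (name_b, net_b) in combinations(networks.items(), 2):
    if net_a.overlaps(net_b):
        raise ValueError(f"{name_a} ({net_a}) overlaps {name_b} ({net_b})")

print("No overlapping ranges found.")
```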

Variations

The gated egress architecture pattern can be combined with other approaches to meet different design requirements that still consider the communication requirements of this pattern. The pattern offers the following options:

Use Google Cloud API gateway and global frontend

Data flowing in Google Cloud from Apigee to a customer project VPC and then out of Cloud to an on-premises environment or another cloud instance.

With this design approach, API exposure and management reside within Google Cloud. As shown in the preceding diagram, you can accomplish this through the implementation of Apigee as the API platform. The decision to deploy an API gateway or load balancer in the remote environment depends on your specific needs and current configuration. Apigee provides two options for provisioning connectivity:

  • With VPC peering
  • Without VPC peering

Google Cloud global frontend capabilities like Cloud Load Balancing, Cloud CDN (when accessed over Cloud Interconnect), and Cross-Cloud Interconnect enhance the speed with which users can access applications that have backends hosted in your on-premises environments and in other cloud environments.

Content delivery speeds are optimized by delivering those applications from Google Cloud points of presence (PoPs). Google Cloud PoPs are present on over 180 internet exchanges and at over 160 interconnection facilities around the world.

To see how PoPs help to deliver high-performing APIs when using Apigee with Cloud CDN to accomplish the following, watch Delivering high-performing APIs with Apigee and Cloud CDN on YouTube:

  • Reduce latency.
  • Host APIs globally.
  • Increase availability for peak traffic.

The design example illustrated in the preceding diagram is based on Private Service Connect without VPC peering.

The northbound networking in this design is established through:

  • A load balancer (LB in the diagram), where client requests terminate, processes the traffic and then routes it to a Private Service Connect backend.
  • A Private Service Connect backend lets a Google Cloud load balancer send client requests over a Private Service Connect connection associated with a producer service attachment to the published service (the Apigee runtime instance) using Private Service Connect network endpoint groups (NEGs).
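
As a hedged illustration of the northbound piece, the following sketch creates a Private Service Connect NEG that references a producer service attachment (for example, an Apigee runtime instance), assuming the google-cloud-compute Python client's PSC NEG fields (network_endpoint_type, psc_target_service). The project, region, network, subnet, and service attachment names are hypothetical, and adding the NEG as a backend of the load balancer isn't shown.

```python
# Sketch only: create a Private Service Connect NEG that references the
# producer's service attachment. Resource names are hypothetical, and the NEG
# still needs to be added to a backend service of the load balancer that
# terminates client requests.
from google.cloud import compute_v1

PROJECT = "my-host-project"
REGION = "us-central1"

neg = compute_v1.NetworkEndpointGroup()
neg.name = "apigee-psc-neg"
neg.network_endpoint_type = "PRIVATE_SERVICE_CONNECT"
neg.psc_target_service = (
    "projects/producer-project/regions/us-central1/serviceAttachments/apigee-attachment"
)
neg.network = f"projects/{PROJECT}/global/networks/workload-vpc"
neg.subnetwork = f"projects/{PROJECT}/regions/{REGION}/subnetworks/psc-subnet"

client = compute_v1.RegionNetworkEndpointGroupsClient()
client.insert(
    project=PROJECT, region=REGION, network_endpoint_group_resource=neg
).result()
```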

The southbound networking is established through:

  • A Private Service Connect endpoint that references a service attachment associated with an internal load balancer (ILB in the diagram) in the customer VPC.
  • The ILB is deployed with hybrid connectivity network endpoint groups (hybrid connectivity NEGs).
  • Hybrid services are accessed through the hybrid connectivity NEG over hybrid network connectivity, like VPN or Cloud Interconnect.
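
The following sketch shows, under similar assumptions about the google-cloud-compute Python client, how an on-premises endpoint could be registered in a hybrid connectivity NEG. The zone, network, and the 10.20.0.10:443 endpoint are hypothetical, and the NEG still has to be attached to the backend service of the ILB.

```python
# Sketch only: create a hybrid connectivity NEG (NON_GCP_PRIVATE_IP_PORT) and
# register one on-premises endpoint that is reachable over VPN or
# Cloud Interconnect. All names and addresses are hypothetical.
from google.cloud import compute_v1

PROJECT = "my-host-project"
ZONE = "us-central1-a"

neg = compute_v1.NetworkEndpointGroup()
neg.name = "on-prem-api-neg"
neg.network_endpoint_type = "NON_GCP_PRIVATE_IP_PORT"
neg.network = f"projects/{PROJECT}/global/networks/workload-vpc"
neg.default_port = 443

neg_client = compute_v1.NetworkEndpointGroupsClient()
neg_client.insert(
    project=PROJECT, zone=ZONE, network_endpoint_group_resource=neg
).result()

# Register the on-premises API gateway or load balancer as an endpoint.
attach_request = compute_v1.NetworkEndpointGroupsAttachEndpointsRequest(
    network_endpoints=[compute_v1.NetworkEndpoint(ip_address="10.20.0.10", port=443)]
)
neg_client.attach_network_endpoints(
    project=PROJECT,
    zone=ZONE,
    network_endpoint_group=neg.name,
    network_endpoint_groups_attach_endpoints_request_resource=attach_request,
).result()
```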

For more information, see Set up a regional internal proxy Network Load Balancer with hybrid connectivity and Private Service Connect deployment patterns.

Note: Depending on your requirements, the APIs of the on-premises backends can be exposed through Apigee Hybrid, a third-party API gateway or proxy, or a load balancer.

Expose remote services using Private Service Connect

Data flowing from Google Cloud to an on-premises environment or another cloud, after originating from a workload in a VPC, and traveling through Cloud Load Balancing, a hybrid connectivity NEG, and a Cloud VPN or interconnect.

Use the Private Service Connect option to expose remote services for the following scenarios:

  • You aren't using an API platform or you want to avoid connecting your entire VPC network directly to an external environment for the following reasons:
    • You have security restrictions or compliance requirements.
    • You have an IP address range overlap, such as in a merger and acquisition scenario.
  • To enable secure uni-directional communications between clients, applications, and services across the environments, even when you have a short deadline.
  • You might need to provide connectivity to multiple consumer VPCs through a service-producer VPC (transit VPC) to offer highly scalable multi-tenant or single-tenant service models, to reach published services on other environments.

Using Private Service Connect for applications that are consumed as APIs provides an internal IP address for the published applications, enabling secure access within the private network across regions and over hybrid connectivity. This abstraction facilitates the integration of resources from diverse clouds and on-premises environments over a hybrid and multicloud connectivity model. You can accelerate application integration and securely expose applications that reside in an on-premises environment, or another cloud environment, by using Private Service Connect to publish the service with fine-grained access. In this case, you can use the option described next.

As illustrated in the following diagram, the workloads in the VPC network of your application can reach the hybrid services running in your on-premises environment, or in other cloud environments, through the Private Service Connect endpoint. This design option for uni-directional communications provides an alternative to peering to a transit VPC.

Data flowing through and between multiple VPCs inside Google Cloud before exiting through a Cloud VPN or Cloud Interconnect and into an on-premises environment or another cloud.

As part of the design in the preceding diagram, multiple frontends, backends, or endpoints can connect to the same service attachment, which lets multiple VPC networks or multiple consumers access the same service. As illustrated in the following diagram, you can make the application accessible to multiple VPCs. This accessibility can help in multi-tenant service scenarios where your service is consumed by multiple consumer VPCs, even if their IP address ranges overlap.

IP address overlap is one of the most common issues when integrating applications that reside in different environments. The Private Service Connect connection in the following diagram helps to avoid the IP address overlap issue. It does so without requiring provisioning or managing any additional networking components, like Cloud NAT or an NVA, to perform the IP address translation. For an example configuration, see Publish a hybrid service by using Private Service Connect.
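
At its core, the consumer side of that configuration is a forwarding rule that targets the producer's service attachment. The following hedged sketch shows that single step, assuming the google-cloud-compute Python client's forwarding rule fields (including I_p_address for the reserved internal address); the project, region, address, and service attachment names are hypothetical, and reserving the address and publishing the producer service are assumed to be done already.

```python
# Sketch only: create a consumer-side Private Service Connect endpoint, which
# is a forwarding rule whose target is the producer's service attachment.
# All resource names are hypothetical placeholders.
from google.cloud import compute_v1

PROJECT = "consumer-project"
REGION = "us-central1"

rule = compute_v1.ForwardingRule()
rule.name = "hybrid-service-psc-endpoint"
rule.network = f"projects/{PROJECT}/global/networks/app-vpc"
# A previously reserved internal IP address in the consumer VPC.
rule.I_p_address = f"projects/{PROJECT}/regions/{REGION}/addresses/psc-endpoint-ip"
# The service attachment published from the producer (transit) VPC.
rule.target = (
    "projects/producer-project/regions/us-central1/serviceAttachments/on-prem-api"
)

client = compute_v1.ForwardingRulesClient()
client.insert(project=PROJECT, region=REGION, forwarding_rule_resource=rule).result()
```

Workloads in the consumer VPC then call the service at the endpoint's internal IP address, independent of the producer's addressing.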

The design has the following advantages:

  • Avoids potential shared scaling dependencies and complex manageability at scale.
  • Improves security by providing fine-grained connectivity control.
  • Reduces IP address coordination between the producer and consumer of the service and the remote external environment.

The design approach in the preceding diagram can expand at later stages to integrate Apigee as the API platform by using the networking design options discussed earlier, including the Private Service Connect option.

You can make the Private Service Connect endpoint accessible from other regions by using Private Service Connect global access.

The client connecting to the Private Service Connect endpoint can be in the same region as the endpoint or in a different region. This approach might be used to provide high availability across services hosted in multiple regions, or to access services available in a single region from other regions. When a Private Service Connect endpoint is accessed by resources hosted in other regions, inter-regional outbound charges apply to the traffic destined to endpoints with global access.

Note: To achieve distributed health checks and to facilitate connecting multiple VPCs to on-premises environments over multiple hybrid connections, chain an internal Application Load Balancer with an external Application Load Balancer. For more information, see Explicit Chaining of Google Cloud L7 Load Balancers with PSC.

Best practices

  • Consider Apigee or Apigee Hybrid as your API platform solution. It provides a proxy layer, and an abstraction or facade, for your backend service APIs, combined with security capabilities, rate limiting, quotas, and analytics.
  • VPCs and project design in Google Cloud should be driven by your resource hierarchy and your secure communication model requirements.
  • When APIs with API gateways are used, you should also use an IP address allowlist. An allowlist limits communications to the specific IP address sources and destinations of the API consumers and API gateways that might be hosted in different environments.
  • Use VPC firewall rules or firewall policies to control access to Private Service Connect resources through the Private Service Connect endpoint.
  • If an application is exposed externally through an application load balancer, consider using Google Cloud Armor as an extra layer of security to protect against DDoS and application layer security threats.
  • If instances require internet access, use Cloud NAT in the application (consumer) VPC to allow workloads to access the internet. Doing so lets you avoid assigning VM instances with external public IP addresses in systems that are deployed behind an API gateway or a load balancer.
  • Review the general best practices for hybrid and multicloud networking patterns.
