Hybrid and multicloud secure networking architecture patterns
This document is the third of three documents in a set. It discusses hybrid and multicloud networking architecture patterns. This part explores several common secure network architecture patterns that you can use for hybrid and multicloud architectures. It describes the scenarios that these networking patterns are best suited for, and provides best practices for implementing them with Google Cloud.
The document set for hybrid and multicloud architecture patterns consists of these parts:
- Build hybrid and multicloud architectures: discusses planning a strategy for architecting a hybrid and multicloud setup with Google Cloud.
- Hybrid and multicloud architecture patterns: discusses common architecture patterns to adopt as part of a hybrid and multicloud strategy.
- Hybrid and multicloud secure networking architecture patterns: discusses hybrid and multicloud networking architecture patterns from a networking perspective (this document).
Connecting private computing environments to Google Cloud securely and reliably is essential for any successful hybrid and multicloud architecture. The hybrid networking connectivity and cloud networking architecture pattern that you choose for a hybrid and multicloud setup must meet the unique requirements of your enterprise workloads. It must also suit the architecture patterns that you intend to apply. Although you might need to tailor each design, there are common patterns that you can use as a blueprint.
The networking architecture patterns in this document shouldn't be considered alternatives to the landing zone design in Google Cloud. Instead, you should design and deploy the architecture patterns that you select as part of the overall Google Cloud landing zone design, which spans the following areas:
- Identities
- Resource management
- Security
- Networking
- Monitoring
Different applications can use different networking architecture patterns, which are incorporated as part of a landing zone architecture. In a multicloud setup, you should maintain the consistency of the landing zone design across all environments.
Contributors
Author: Marwan Al Shawi | Partner Customer Engineer
Other contributors:
- Saud Albazei | Customer Engineer, Application Modernization
- Anna Berenberg | Engineering Fellow
- Marco Ferrari | Cloud Solutions Architect
- Victor Moreno | Product Manager, Cloud Networking
- Johannes Passing | Cloud Solutions Architect
- Mark Schlagenhauf | Technical Writer, Networking
- Daniel Strebel | EMEA Solution Lead, Application Modernization
- Ammett Williams | Developer Relations Engineer
Architecture patterns
The documents in this series discuss networking architecture patterns that are designed based on the required communication models between applications residing in Google Cloud and in other environments (on-premises, in other clouds, or both).
These patterns should be incorporated into the overall organization landing zone architecture, which can include multiple networking patterns to address the specific communication and security requirements of different applications.
The documents in this series also discuss the different design variations that can be used with each architecture pattern. The following networking patterns can help you to meet communication and security requirements for your applications:
Mirrored pattern
The mirrored pattern is based on replicating the design of one or more existing environments in one or more new environments. Therefore, this pattern applies primarily to architectures that follow the environment hybrid pattern. In that pattern, you run your development and testing workloads in one environment while you run your staging and production workloads in another.
The mirrored pattern assumes that testing and production workloads aren't supposed to communicate directly with one another. However, it should be possible to manage and deploy both groups of workloads in a consistent manner.
If you use this pattern, connect the two computing environments in a way that aligns with the following requirements:
- Continuous integration/continuous deployment (CI/CD) can deploy and manage workloads across all computing environments or specific environments.
- Monitoring, configuration management, and other administrative systems should work across computing environments.
- Workloads can't communicate directly across computing environments. If communication is necessary, it has to happen in a fine-grained and controlled fashion.
Architecture
The following architecture diagram shows a high-level reference architecture of this pattern that supports CI/CD, monitoring, configuration management, other administrative systems, and workload communication:
The description of the architecture in the preceding diagram is as follows:
- Workloads are distributed based on the functional environments (development, testing, CI/CD, and administrative tooling) across separate VPCs on the Google Cloud side.
- Shared VPC is used for development and testing workloads. An extra VPC is used for the CI/CD and administrative tooling. With Shared VPCs:
- The applications are managed by different teams per environment and per service project.
- The host project administers and controls the network communication and security controls between the development and test environments, as well as to destinations outside the VPC.
- The CI/CD VPC is connected to the network running the production workloads in your private computing environment.
- Firewall rules permit only allowed traffic.
- You might also use Cloud Next Generation Firewall Enterprise with intrusion prevention service (IPS) to implement deep packet inspection for threat prevention without changing the design or routing. Cloud Next Generation Firewall Enterprise works by creating Google-managed zonal firewall endpoints that use packet intercept technology to transparently inspect the workloads for the configured threat signatures and to help protect them against threats.
- VPC Network Peering enables communication among the peered VPCs using internal IP addresses.
- The peering in this pattern allows the CI/CD and administrative systems to deploy and manage development and testing workloads.
- Consider these general best practices.
You establish this CI/CD connection by using one of the discussed hybrid and multicloud networking connectivity options that meet your business and application requirements. To let you deploy and manage production workloads, this connection provides private network reachability between the different computing environments. All environments should have overlap-free RFC 1918 IP address space.
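As an illustrative sketch only (project, network, range, and tag names are hypothetical), the peering between the CI/CD VPC and the development/testing Shared VPC, plus a firewall rule that admits only CI/CD traffic, might look like the following:

```shell
# Peer the CI/CD VPC with the development/testing Shared VPC.
# Run the mirror-image command from the other network as well.
gcloud compute networks peerings create cicd-to-dev \
    --network=cicd-vpc \
    --peer-project=dev-host-project \
    --peer-network=dev-shared-vpc

# Allow only the CI/CD subnet to reach deployment targets.
# All other ingress is dropped by the implied deny-ingress rule.
gcloud compute firewall-rules create allow-cicd-deploy \
    --project=dev-host-project \
    --network=dev-shared-vpc \
    --direction=INGRESS \
    --allow=tcp:22,tcp:443 \
    --source-ranges=10.100.0.0/24 \
    --target-tags=deploy-target
```

Scoping the rule with source ranges and target tags keeps the peering from implying unrestricted reachability between the environments.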
If the instances in the development and testing environments require internet access, consider the following options:
- You can deploy Cloud NAT in the same Shared VPC host project network. Deploying it there helps avoid making these instances directly accessible from the internet.
- For outbound web traffic, you can use Secure Web Proxy. The proxy offers several benefits.
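A minimal Cloud NAT sketch for the Shared VPC host project (project, network, and region names are placeholders) could be:

```shell
# Cloud NAT requires a Cloud Router in the same region and network.
gcloud compute routers create dev-nat-router \
    --project=dev-host-project \
    --network=dev-shared-vpc \
    --region=us-central1

# Provide outbound internet access without assigning external
# IP addresses to the development and testing instances.
gcloud compute routers nats create dev-nat \
    --project=dev-host-project \
    --router=dev-nat-router \
    --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```

Because Cloud NAT is egress-only, the instances stay unreachable from the internet while still being able to fetch packages and updates.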
For more information about the Google Cloud tools and capabilities that help you to build, test, and deploy in Google Cloud and across hybrid and multicloud environments, see the DevOps and CI/CD on Google Cloud explained blog.
Variations
To meet different design requirements, while still considering all communication requirements, the mirrored architecture pattern offers these options, which are described in the following sections:
- Shared VPC per environment
- Centralized application layer firewall
- Hub-and-spoke topology
- Microservices zero trust distributed architecture
Shared VPC per environment
The Shared VPC per environment design option allows for application- or service-level separation across environments, including CI/CD and administrative tools that might be required to meet certain organizational security requirements. These requirements limit communication, administrative domain, and access control for different services that also need to be managed by different teams.
This design achieves separation by providing network- and project-level isolation between the different environments, which enables more fine-grained communication and Identity and Access Management (IAM) access control.
From a management and operations perspective, this design provides the flexibility to manage the applications and workloads created by different teams per environment and per service project. VPC networking and its security features can be provisioned and managed by networking operations teams based on the following possible structures:
- One team manages all host projects across all environments.
- Different teams manage the host projects in their respective environments.
Decisions about managing host projects should be based on the team structure, security operations, and access requirements of each team. You can apply this design variation to the Shared VPC network for each environment landing zone design option. However, you need to consider the communication requirements of the mirrored pattern to define what communication is allowed between the different environments, including communication over the hybrid network.
You can also provision a Shared VPC network for each main environment, as illustrated in the following diagram:
Centralized application layer firewall
In some scenarios, the security requirements might mandate the consideration of application layer (Layer 7) and deep packet inspection with advanced firewalling mechanisms that exceed the capabilities of Cloud Next Generation Firewall. To meet the security requirements and standards of your organization, you can use an NGFW appliance hosted in a network virtual appliance (NVA). Several Google Cloud security partners offer options well suited to this task.
As illustrated in the following diagram, you can place the NVA in the network path between Virtual Private Cloud and the private computing environment by using multiple network interfaces.
This design can also be used with multiple Shared VPCs, as illustrated in the following diagram.
The NVA in this design acts as the perimeter security layer. It also serves as the foundation for enabling inline traffic inspection and enforcing strict access control policies.
For a robust multilayer security strategy that includes VPC firewall rules and intrusion prevention service capabilities, apply further traffic inspection and security control to both east-west and north-south traffic flows.
Note: In supported cloud regions, and when technically feasible for your design, NVAs can be deployed without requiring multiple VPC networks or appliance interfaces. This deployment is based on using load balancing and policy-based routing capabilities. These capabilities enable a topology-independent, policy-driven mechanism for integrating NVAs into your cloud network. For more details, see Deploy network virtual appliances (NVAs) without multiple VPCs.
Hub-and-spoke topology
Another possible design variation is to use separate VPCs (including Shared VPCs) for your development and different testing stages. In this variation, as shown in the following diagram, all stage environments connect with the CI/CD and administrative VPC in a hub-and-spoke architecture. Use this option if you must separate the administrative domains and the functions in each environment. The hub-and-spoke communication model can help with the following requirements:
- Applications need to access a common set of services, like monitoring, configuration management tools, CI/CD, or authentication.
- A common set of security policies needs to be applied to inbound and outbound traffic in a centralized manner through the hub.
For more information about hub-and-spoke design options, see Hub-and-spoke topology with centralized appliances and Hub-and-spoke topology without centralized appliances.
As shown in the preceding diagram, the inter-VPC communication and hybrid connectivity all pass through the hub VPC. As part of this pattern, you can control and restrict the communication at the hub VPC to align with your connectivity requirements.
As part of the hub-and-spoke network architecture, the following are the primary connectivity options (between the spoke and hub VPCs) on Google Cloud:
- VPC Network Peering
- VPN
- Using a network virtual appliance (NVA):
- With multiple network interfaces
- With Network Connectivity Center (NCC)
For more information on which option you should consider in your design, see Hub-and-spoke network architecture. A key factor in selecting VPN over VPC peering between the spokes and the hub VPC is when traffic transitivity is required. Traffic transitivity means that traffic from a spoke can reach other spokes through the hub.
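For example, a hub-and-spoke setup based on VPC Network Peering (network names are hypothetical) might be sketched as follows. Note that peering is non-transitive, so spoke-to-spoke traffic through the hub needs VPN, an NVA, or Network Connectivity Center instead:

```shell
# Peer a spoke with the hub. Exporting custom routes from the hub
# lets the spoke learn on-premises (hybrid) routes through the hub VPC.
gcloud compute networks peerings create hub-to-spoke1 \
    --network=hub-vpc \
    --peer-network=spoke1-vpc \
    --export-custom-routes

# Create the matching peering from the spoke side and import
# the custom routes that the hub exports.
gcloud compute networks peerings create spoke1-to-hub \
    --network=spoke1-vpc \
    --peer-network=hub-vpc \
    --import-custom-routes
```

Repeat the pair of commands for each additional spoke VPC.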
Microservices zero trust distributed architecture
Hybrid and multicloud architectures can require multiple clusters to achieve their technical and business objectives, including separating the production environment from the development and testing environments. Therefore, network perimeter security controls are important, especially when they're required to comply with certain security requirements.
Network perimeter security controls alone aren't enough to support the security requirements of current cloud-first distributed microservices architectures; you should also consider zero trust distributed architectures. The microservices zero trust distributed architecture supports your microservices architecture with microservice-level security policy enforcement, authentication, and workload identity. Trust is identity-based and enforced for each service.
By using a distributed proxy architecture, such as a service mesh, services can effectively validate callers and implement fine-grained access control policies for each request, enabling a more secure and scalable microservices environment. Cloud Service Mesh gives you the flexibility to have a common mesh that can span your Google Cloud and on-premises deployments. The mesh uses authorization policies to help secure service-to-service communications.
You might also incorporate Apigee Adapter for Envoy, which is a lightweight Apigee API gateway deployment within a Kubernetes cluster, with this architecture. Envoy is an open source edge and service proxy that's designed for cloud-first applications.
For more information about this topic, see the following articles:
- Zero Trust Distributed Architecture
- GKE Enterprise hybrid environment
- Connect to Google
- Connect an on-premises GKE Enterprise cluster to a Google Cloud network.
- Set up a multicloud or hybrid mesh
- Deploy Cloud Service Mesh across environments and clusters.
Mirrored pattern best practices
- The CI/CD systems required for deploying or reconfiguring production deployments must be highly available, meaning that all architecture components must be designed to provide the expected level of system availability. For more information, see Google Cloud infrastructure reliability.
- To eliminate configuration errors for repeated processes like code updates, automation is essential to standardize your builds, tests, and deployments.
- The integration of centralized NVAs in this design might require the incorporation of multiple segments with varying levels of security access controls.
- When designing a solution that includes NVAs, it's important to consider the high availability (HA) of the NVAs to avoid a single point of failure that could block all communication. Follow the HA and redundancy design and implementation guidance provided by your NVA vendor.
- By not exporting on-premises IP routes over VPC peering or VPN to the development and testing VPC, you can restrict network reachability from development and testing environments to the on-premises environment. For more information, see VPC Network Peering custom route exchange.
- For workloads with private IP addressing that require access to Google APIs, you can expose Google APIs by using a Private Service Connect endpoint within a VPC network. For more information, see Gated ingress in this series.
- Review the general best practices for hybrid and multicloud networking architecture patterns.
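As a sketch of the Private Service Connect option for private access to Google APIs (the network name and endpoint address are hypothetical):

```shell
# Reserve an internal IP address for the Private Service Connect endpoint.
gcloud compute addresses create psc-googleapis-ip \
    --global \
    --purpose=PRIVATE_SERVICE_CONNECT \
    --addresses=10.3.0.5 \
    --network=dev-shared-vpc

# Create the endpoint that fronts Google APIs from inside the VPC.
# The forwarding rule name becomes the endpoint name.
gcloud compute forwarding-rules create pscgoogleapis \
    --global \
    --network=dev-shared-vpc \
    --address=psc-googleapis-ip \
    --target-google-apis-bundle=all-apis
```

Workloads then reach Google APIs through the internal endpoint address, so the instances don't need external IP addresses or a route to the public internet.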
Meshed pattern
The meshed pattern is based on establishing a hybrid network architecture. That architecture spans multiple computing environments. In these environments, all systems can communicate with one another and aren't limited to one-way communication based on the security requirements of your applications. This networking pattern applies primarily to tiered hybrid, partitioned multicloud, or bursting architectures. It's also applicable to business continuity design to provision a disaster recovery (DR) environment in Google Cloud. In all cases, it requires that you connect computing environments in a way that aligns with the following communication requirements:
- Workloads can communicate with one another across environment boundaries using private RFC 1918 IP addresses.
- Communication can be initiated from either side. The specifics of the communication model can vary based on the applications and security requirements, such as the communication models discussed in the design options that follow.
- The firewall rules that you use must allow traffic between specific IP address sources and destinations based on the requirements of the application, or applications, for which the pattern is designed. Ideally, you can use a multi-layered security approach to restrict traffic flows in a fine-grained fashion, both between and within computing environments.
Architecture
The following diagram illustrates a high-level reference architecture of the meshed pattern.
- All environments should use an overlap-free RFC 1918 IP address space.
- On the Google Cloud side, you can deploy workloads into a single VPC or multiple VPCs (shared or non-shared). For other possible design options of this pattern, refer to the design variations that follow. The selected structure of your VPCs should align with the projects and resource hierarchy design of your organization.
- The VPC network of Google Cloud extends to other computing environments. Those environments can be on-premises or in another cloud. Use one of the hybrid and multicloud networking connectivity options that meet your business and application requirements.
Limit communications to only the allowed IP addresses of your sources and destinations. Use any of the following capabilities, or a combination of them:
Network virtual appliance (NVA) with next generation firewall (NGFW) inspection capabilities, placed in the network path.
Cloud Next Generation Firewall Enterprise with intrusion prevention service (IPS) to implement deep packet inspection for threat prevention without changing the network design or routing.
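A sketch of fine-grained allowlist rules for a meshed VPC (the on-premises range, ports, and tags are assumptions) could be:

```shell
# Allow inbound traffic only from the on-premises application subnet.
gcloud compute firewall-rules create allow-onprem-ingress \
    --network=prod-vpc \
    --direction=INGRESS \
    --allow=tcp:8443 \
    --source-ranges=192.168.10.0/24 \
    --target-tags=app-backend

# Establish a low-priority deny-all egress baseline, because VPC
# networks allow all egress by default.
gcloud compute firewall-rules create deny-all-egress \
    --network=prod-vpc \
    --direction=EGRESS \
    --action=DENY --rules=all \
    --destination-ranges=0.0.0.0/0 \
    --priority=65000

# Then allow outbound traffic only to the same on-premises subnet.
gcloud compute firewall-rules create allow-egress-to-onprem \
    --network=prod-vpc \
    --direction=EGRESS \
    --allow=tcp:8443 \
    --destination-ranges=192.168.10.0/24 \
    --priority=1000
```

Because VPC firewall rules are stateful, return traffic for allowed connections is admitted automatically in both directions.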
Variations
The meshed architecture pattern can be combined with other approaches to meet different design requirements, while still considering the communication requirements of the pattern. The pattern options are described in the following sections:
- One VPC per environment
- Use a centralized application layer firewall
- Microservices zero trust distributed architecture
One VPC per environment
The common reasons to consider the one-VPC-per-environment option are as follows:
- The cloud environment requires network-level separation of the VPC networks and resources, in alignment with your organization's resource hierarchy design. If administrative domain separation is required, it can also be combined with a separate project per environment.
- To centrally manage network resources in a common network and provide network isolation between the different environments, use a Shared VPC for each environment that you have in Google Cloud, such as development, testing, and production.
- Scale requirements that might need to go beyond the VPC quotas for a single VPC or project.
As illustrated in the following diagram, the one-VPC-per-environment design lets each VPC integrate directly with the on-premises environment or other cloud environments by using VPNs, or by using Cloud Interconnect with multiple VLAN attachments.
The pattern displayed in the preceding diagram can be applied on a landing zone hub-and-spoke network topology. In that topology, a single hybrid connection (or multiple connections) can be shared with all spoke VPCs by using a transit VPC that terminates both the hybrid connectivity and the connections to the other spoke VPCs. You can also expand this design by adding an NVA with next-generation firewall (NGFW) inspection capabilities at the transit VPC, as described in the next section, "Use a centralized application layer firewall."
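For the one-VPC-per-environment option, each VPC can get its own VLAN attachment on a shared Dedicated Interconnect. A hedged sketch (router, ASN, interconnect, and VPC names are all hypothetical):

```shell
# Each environment VPC needs its own Cloud Router for the attachment.
gcloud compute routers create prod-ic-router \
    --network=prod-vpc \
    --region=us-central1 \
    --asn=65001

# Create a VLAN attachment for this VPC on the existing
# Dedicated Interconnect.
gcloud compute interconnects attachments dedicated create prod-attachment \
    --region=us-central1 \
    --router=prod-ic-router \
    --interconnect=my-interconnect
```

Repeating this per environment VPC keeps each environment's hybrid routing domain separate while reusing the same physical interconnect.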
Use a centralized application layer firewall
If your technical requirements mandate considering application layer (Layer 7) and deep packet inspection with advanced firewalling capabilities that exceed the capabilities of Cloud Next Generation Firewall, you can use an NGFW appliance hosted in an NVA. However, that NVA must meet the security needs of your organization. To implement these mechanisms, you can extend the topology to pass all cross-environment traffic through a centralized NVA firewall, as shown in the following diagram.
You can apply the pattern in the following diagram on the landing zone design by using a hub-and-spoke topology with centralized appliances:
As shown in the preceding diagram, the NVA acts as the perimeter security layer and serves as the foundation for enabling inline traffic inspection. It also enforces strict access control policies. To inspect both east-west and north-south traffic flows, the design of a centralized NVA might include multiple segments with different levels of security access controls.
Microservices zero trust distributed architecture
When containerized applications are used, the microservices zero trust distributed architecture discussed in the mirrored pattern section is also applicable to this architecture pattern.
The key difference between this pattern and the mirrored pattern is that the communication model between workloads in Google Cloud and other environments can be initiated from either side. Traffic must be controlled and fine-grained, based on the application and security requirements, by using Cloud Service Mesh.
Meshed pattern best practices
- Before you do anything else, decide on your resource hierarchy design and the design required to support any project and VPC. Doing so can help you select the optimal networking architecture that aligns with the structure of your Google Cloud projects.
- Use a zero trust distributed architecture when using Kubernetes within your private computing environment and Google Cloud.
- When you use centralized NVAs in your design, you should define multiple segments with different levels of security access controls and traffic inspection policies. Base these controls and policies on the security requirements of your applications.
- When designing a solution that includes NVAs, it's important to consider the high availability (HA) of the NVAs to avoid a single point of failure that could block all communication. Follow the HA and redundancy design and implementation guidance provided by the Google Cloud security vendor that supplies your NVAs.
- To provide increased privacy, data integrity, and a controlled communication model, expose applications through APIs by using API gateways, like Apigee and Apigee hybrid with end-to-end mTLS. You can also use a Shared VPC with Apigee in the same organization resource.
- If the design of your solution requires exposing a Google Cloud-based application to the public internet, consider the design recommendations discussed in Networking for internet-facing application delivery.
- To help protect Google Cloud services in your projects, and to help mitigate the risk of data exfiltration, use VPC Service Controls to specify service perimeters at the project or VPC network level. Also, you can extend service perimeters to a hybrid environment over an authorized VPN or Cloud Interconnect. For more information about the benefits of service perimeters, see Overview of VPC Service Controls.
- Review the general best practices for hybrid and multicloud networking patterns.
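A minimal VPC Service Controls perimeter sketch (the access policy ID, project number, and restricted services are placeholders):

```shell
# Create a perimeter so that the listed services in the project can
# only be reached from inside the perimeter, which helps mitigate
# data exfiltration paths.
gcloud access-context-manager perimeters create prod_perimeter \
    --policy=123456789 \
    --title="Production perimeter" \
    --resources=projects/111111111111 \
    --restricted-services=storage.googleapis.com,bigquery.googleapis.com
```

Extending the perimeter to on-premises access then requires an authorized VPN or Cloud Interconnect path, as noted above.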
If you intend to enforce stricter isolation and more fine-grained access between your applications hosted in Google Cloud and in other environments, consider using one of the gated patterns that are discussed in the other documents in this series.
Gated patterns
The gated pattern is based on an architecture that exposes select applications and services in a fine-grained manner, based on specific exposed APIs or endpoints between the different environments. This guide categorizes this pattern into three possible options, each determined by the specific communication model:
- Gated egress
- Gated ingress
- Gated egress and ingress (bidirectional gated in both directions)
As previously mentioned in this guide, the networking architecture patterns described here can be adapted to various applications with diverse requirements. To address the specific needs of different applications, your main landing zone architecture might incorporate one pattern or a combination of patterns simultaneously. The specific deployment of the selected architecture is determined by the specific communication requirements of each gated pattern.
Note: In general, the gated pattern can be applied to or incorporated with the landing zone design option that exposes services in a consumer-producer model. This series discusses each gated pattern and its possible design options. However, one common design option applicable to all gated patterns is the Zero Trust Distributed Architecture for containerized applications with a microservice architecture. This option is powered by Cloud Service Mesh, Apigee, and Apigee Adapter for Envoy, a lightweight Apigee gateway deployment within a Kubernetes cluster. Envoy is a popular, open source edge and service proxy that's designed for cloud-first applications. This architecture controls allowed secure service-to-service communications and the direction of communication at a service level. Traffic communication policies can be designed, fine-tuned, and applied at the service level based on the selected pattern.
Gated patterns allow for the implementation of Cloud Next Generation Firewall Enterprise with intrusion prevention service (IPS) to perform deep packet inspection for threat prevention without any design or routing modifications. That inspection is subject to the specific applications being accessed, the communication model, and the security requirements. If security requirements demand Layer 7 and deep packet inspection with advanced firewalling mechanisms that surpass the capabilities of Cloud Next Generation Firewall, you can use a centralized next generation firewall (NGFW) hosted in a network virtual appliance (NVA). Several Google Cloud security partners offer NGFW appliances that can meet your security requirements. Integrating NVAs with these gated patterns can require introducing multiple security zones within the network design, each with distinct access control levels.
Gated egress
The architecture of the gated egress networking pattern is based on exposing select APIs from the on-premises environment or another cloud environment to workloads that are deployed in Google Cloud. It does so without directly exposing them to the public internet from an on-premises environment or from other cloud environments. You can facilitate this limited exposure through an API gateway or proxy, or a load balancer that serves as a facade for existing workloads. You can deploy the API gateway functionality in an isolated perimeter network segment, like a perimeter network.
The gated egress networking pattern applies primarily to (but isn't limited to) tiered application architecture patterns and partitioned application architecture patterns. When deploying backend workloads within an internal network, gated egress networking helps to maintain a higher level of security within your on-premises computing environment. The pattern requires that you connect computing environments in a way that meets the following communication requirements:
- Workloads that you deploy in Google Cloud can communicate with the API gateway or load balancer (or a Private Service Connect endpoint) that exposes the application by using internal IP addresses.
- Other systems in the private computing environment can't be reached directly from within Google Cloud.
- Communication from the private computing environment to any workloads deployed in Google Cloud isn't allowed.
- Traffic to the private APIs in other environments is only initiated from within the Google Cloud environment.
The focus of this guide is on hybrid and multicloud environments connected over a private hybrid network. If the security requirements of your organization permit it, API calls to remote target APIs with public IP addresses can be made directly over the internet. But you must consider the following security mechanisms:
- API OAuth 2.0 with Transport Layer Security (TLS).
- Rate limiting.
- Threat protection policies.
- Mutual TLS configured to the backend of your API layer.
- IP address allowlist filtering configured to only allow communication with predefined API sources and destinations from both sides.
To secure an API proxy, consider these other security aspects. For more information, see Best practices for securing your applications and APIs using Apigee.
Architecture
The following diagram shows a reference architecture that supports the communication requirements listed in the previous section:
Data flows through the preceding diagram as follows:
- On the Google Cloud side, you can deploy workloads into virtual private clouds (VPCs). The VPCs can be single or multiple (shared or non-shared). The deployment should be in alignment with the projects and resource hierarchy design of your organization.
- The VPC networks of the Google Cloud environment are extended to the other computing environments. The environments can be on-premises or in another cloud. To facilitate the communication between environments using internal IP addresses, use a suitable hybrid and multicloud networking connectivity option.
To limit the traffic that originates from specific VPC IP addresses and is destined for remote gateways or load balancers, use IP address allowlist filtering. Return traffic from these connections is allowed when using stateful firewall rules. You can use any combination of the following capabilities to secure and limit communications to only the allowed source and destination IP addresses:
Network virtual appliance (NVA) with next generation firewall (NGFW) inspection capabilities that are placed in the network path.
Cloud Next Generation Firewall Enterprise with intrusion prevention service (IPS) to implement deep packet inspection for threat prevention.
All environments share overlap-free RFC 1918 IP address space.
Variations
The gated egress architecture pattern can be combined with other approaches to meet different design requirements that still consider the communication requirements of this pattern. The pattern offers the following options:
- Use Google Cloud API gateway and global frontend
- Expose remote services using Private Service Connect
Use Google Cloud API gateway and global frontend
With this design approach, API exposure and management reside within Google Cloud. As shown in the preceding diagram, you can accomplish this through the implementation of Apigee as the API platform. The decision to deploy an API gateway or load balancer in the remote environment depends on your specific needs and current configuration. Apigee provides two options for provisioning connectivity:
- With VPC peering
- Without VPC peering
Google Cloud global frontend capabilities like Cloud Load Balancing, Cloud CDN (when accessed over Cloud Interconnect), and Cross-Cloud Interconnect enhance the speed with which users can access applications that have backends hosted in your on-premises environments and in other cloud environments.
Content delivery speeds are optimized by delivering those applications from Google Cloud points of presence (PoPs). Google Cloud PoPs are present on over 180 internet exchanges and at over 160 interconnection facilities around the world.
To see how PoPs help to deliver high-performing APIs when using Apigee with Cloud CDN to accomplish the following, watch Delivering high-performing APIs with Apigee and Cloud CDN on YouTube:
- Reduce latency.
- Host APIs globally.
- Increase availability for peak traffic.
The design example illustrated in the preceding diagram is based on Private Service Connect without VPC peering.
The northbound network in this design is established through:
- A load balancer (LB in the diagram), where client requests terminate. The load balancer processes the traffic and then routes it to a Private Service Connect backend.
- A Private Service Connect backend, which lets a Google Cloud load balancer send client requests over a Private Service Connect connection that's associated with a producer service attachment to the published service (Apigee runtime instance) by using Private Service Connect network endpoint groups (NEGs).
The southbound networking is established through:
- A Private Service Connect endpoint that references a service attachment associated with an internal load balancer (ILB in the diagram) in the customer VPC.
The ILB is deployed with hybrid connectivity network endpoint groups (hybrid connectivity NEGs).
Hybrid services are accessed through the hybrid connectivity NEG over hybrid network connectivity, like Cloud VPN or Cloud Interconnect.
For more information, see Set up a regional internal proxy Network Load Balancer with hybrid connectivity and Private Service Connect deployment patterns.
Note: Depending on your requirements, the APIs of the on-premises backends can be exposed through Apigee Hybrid, a third-party API gateway or proxy, or a load balancer.

Expose remote services using Private Service Connect
Use the Private Service Connect option to expose remote servicesfor the following scenarios:
- You aren't using an API platform, or you want to avoid connecting your entire VPC network directly to an external environment for the following reasons:
- You have security restrictions or compliance requirements.
- You have an IP address range overlap, such as in a merger and acquisition scenario.
- To enable secure uni-directional communications between clients, applications, and services across the environments, even when you have a short deadline.
- You might need to provide connectivity to multiple consumer VPCs through a service-producer VPC (transit VPC) to offer highly scalable multi-tenant or single-tenant service models, to reach published services in other environments.
Using Private Service Connect for applications that are consumed as APIs provides an internal IP address for the published applications, enabling secure access within the private network across regions and over hybrid connectivity. This abstraction facilitates the integration of resources from diverse clouds and on-premises environments over a hybrid and multicloud connectivity model. You can accelerate application integration and securely expose applications that reside in an on-premises environment, or in another cloud environment, by using Private Service Connect to publish the service with fine-grained access. In this case, you can use the following option:
- A service attachment that references a regional internal proxy Network Load Balancer or an internal Application Load Balancer.
- The load balancer uses a hybrid network endpoint group (hybrid connectivity NEG) in a producer VPC that acts in this design as a transit VPC.
In the preceding diagram, the workloads in the VPC network of your application can reach the hybrid services running in your on-premises environment, or in other cloud environments, through the Private Service Connect endpoint, as illustrated in the following diagram. This design option for uni-directional communications provides an alternative option to peering to a transit VPC.
As part of the design in the preceding diagram, multiple frontends, backends, or endpoints can connect to the same service attachment, which lets multiple VPC networks or multiple consumers access the same service. As illustrated in the following diagram, you can make the application accessible to multiple VPCs. This accessibility can help in multi-tenant service scenarios where your service is consumed by multiple consumer VPCs, even if their IP address ranges overlap.
IP address overlap is one of the most common issues when integrating applications that reside in different environments. The Private Service Connect connection in the following diagram helps to avoid the IP address overlap issue. It does so without requiring the provisioning or management of any additional networking components, like Cloud NAT or an NVA, to perform the IP address translation. For an example configuration, see Publish a hybrid service by using Private Service Connect.
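To make the overlap problem concrete, here is a minimal Python sketch with hypothetical ranges for a merger scenario. Direct routing between the two networks is impossible because the ranges collide, which is the situation a Private Service Connect endpoint sidesteps by presenting the service on a non-overlapping internal IP address in the consumer VPC:

```python
import ipaddress

# Hypothetical merger scenario: both companies allocated from 10.0.0.0/8,
# and their subnets collide.
consumer_vpc = ipaddress.ip_network("10.1.0.0/16")
producer_env = ipaddress.ip_network("10.1.128.0/17")  # overlaps the consumer range

def direct_routing_possible(consumer, producer):
    """Direct peering or VPN routing needs disjoint ranges;
    a Private Service Connect endpoint does not."""
    return not consumer.overlaps(producer)

print(direct_routing_possible(consumer_vpc, producer_env))  # False: use PSC
```

When this check fails, the traditional fix is NAT or renumbering; publishing the service through Private Service Connect avoids both.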
The design has the following advantages:
- Avoids potential shared scaling dependencies and complex manageability at scale.
- Improves security by providing fine-grained connectivity control.
- Reduces IP address coordination between the producer and consumer of the service and the remote external environment.
The design approach in the preceding diagram can expand at later stages to integrate Apigee as the API platform by using the networking design options discussed earlier, including the Private Service Connect option.
You can make the Private Service Connect endpoint accessible from other regions by using Private Service Connect global access.
The client connecting to the Private Service Connect endpoint can be in the same region as the endpoint or in a different region. This approach might be used to provide high availability across services hosted in multiple regions, or to access services available in a single region from other regions. When a Private Service Connect endpoint is accessed by resources hosted in other regions, inter-regional outbound charges apply to the traffic destined to endpoints with global access.
Note: To achieve distributed health checks and to facilitate connecting multiple VPCs to on-premises environments over multiple hybrid connections, chain an internal Application Load Balancer with an external Application Load Balancer. For more information, see Explicit Chaining of Google Cloud L7 Load Balancers with PSC.

Best practices
- Considering Apigee and Apigee Hybrid as your API platform solution offers several benefits. It provides a proxy layer, and an abstraction or facade, for your backend service APIs, combined with security capabilities, rate limiting, quotas, and analytics.
- Use Apigee Adapter for Envoy with an Apigee Hybrid deployment with Kubernetes architecture, where applicable to your requirements and the architecture.
- VPCs and project design in Google Cloud should be driven by your resource hierarchy and your secure communication model requirements.
- When APIs with API gateways are used, you should also use an IP address allowlist. An allowlist limits communications to the specific IP address sources and destinations of the API consumers and API gateways that might be hosted in different environments.
- Use VPC firewall rules or firewall policies to control access to Private Service Connect resources through the Private Service Connect endpoint.
- If an application is exposed externally through an application load balancer, consider using Google Cloud Armor as an extra layer of security to protect against DDoS and application layer security threats.
- If instances require internet access, use Cloud NAT in the application (consumer) VPC to allow workloads to access the internet. Doing so lets you avoid assigning external public IP addresses to VM instances in systems that are deployed behind an API gateway or a load balancer.
- For outbound web traffic, you can use Google Cloud Secure Web Proxy. The proxy offers several benefits.
- Review the general best practices for hybrid and multicloud networking patterns.
Gated ingress
The architecture of the gated ingress pattern is based on exposing select APIs of workloads running in Google Cloud to the private computing environment without exposing them to the public internet. This pattern is the counterpart to the gated egress pattern and is well suited for edge hybrid, tiered hybrid, and partitioned multicloud scenarios.
Like with the gated egress pattern, you can facilitate this limited exposure through an API gateway or load balancer that serves as a facade for existing workloads or services. Doing so makes them accessible to private computing environments, on-premises environments, or other cloud environments, as follows:
- Workloads that you deploy in the private computing environment or other cloud environments are able to communicate with the API gateway or load balancer by using internal IP addresses. Other systems deployed in Google Cloud can't be reached.
- Communication from Google Cloud to the private computing environment or to other cloud environments isn't allowed. Traffic is only initiated from the private environment or other cloud environments to the APIs in Google Cloud.
Architecture
The following diagram shows a reference architecture that meets the requirements of the gated ingress pattern.
The description of the architecture in the preceding diagram is as follows:
- On the Google Cloud side, you deploy workloads into an application VPC (or multiple VPCs).
- The Google Cloud environment network extends to other computing environments (on-premises or on another cloud) by using hybrid or multicloud network connectivity to facilitate the communication between environments.
- Optionally, you can use a transit VPC to accomplish the following:
- Provide additional perimeter security layers to allow access to specific APIs outside of your application VPC.
- Route traffic to the IP addresses of the APIs. You can create VPC firewall rules to prevent some sources from accessing certain APIs through an endpoint.
- Inspect Layer 7 traffic at the transit VPC by integrating a network virtual appliance (NVA).
- Access APIs through an API gateway or a load balancer (proxy or application load balancer) to provide a proxy layer, and an abstraction layer or facade, for your service APIs. If you need to distribute traffic across multiple API gateway instances, you can use an internal passthrough Network Load Balancer.
- Provide limited and fine-grained access to a published service through a Private Service Connect endpoint, by using a load balancer with Private Service Connect to expose an application or service.
- All environments should use an overlap-free RFC 1918 IP address space.
The following diagram illustrates the design of this pattern using Apigee as the API platform.
In the preceding diagram, using Apigee as the API platform provides the following features and capabilities to enable the gated ingress pattern:
- Gateway or proxy functionality
- Security capabilities
- Rate limiting
- Analytics
In the design:
- The northbound networking connectivity (for traffic coming from other environments) passes through a Private Service Connect endpoint in your application VPC that's associated with the Apigee VPC.
- At the application VPC, an internal load balancer is used to expose the application APIs through a Private Service Connect endpoint presented in the Apigee VPC. For more information, see Architecture with VPC peering disabled.
- Configure firewall rules and traffic filtering at the application VPC. Doing so provides fine-grained and controlled access. It also helps stop systems from directly reaching your applications without passing through the Private Service Connect endpoint and API gateway.
- Also, you can restrict the advertisement of the internal IP address subnet of the backend workload in the application VPC to the on-premises network, to avoid direct reachability without passing through the Private Service Connect endpoint and the API gateway.
Certain security requirements might require perimeter security inspection outside the application VPC, including for hybrid connectivity traffic. In such cases, you can incorporate a transit VPC to implement additional security layers. These layers, like NVAs with next generation firewall (NGFW) capabilities and multiple network interfaces, or Cloud Next Generation Firewall Enterprise with intrusion prevention service (IPS), perform deep packet inspection outside of your application VPC, as illustrated in the following diagram:
As illustrated in the preceding diagram:
- The northbound networking connectivity (for traffic coming from other environments) passes through a separate transit VPC toward the Private Service Connect endpoint in the transit VPC that's associated with the Apigee VPC.
- At the application VPC, an internal load balancer (ILB in the diagram) is used to expose the application through a Private Service Connect endpoint in the Apigee VPC.
You can provision several endpoints in the same VPC network, as shown in the following diagram. To cover different use cases, you can control the different possible network paths by using Cloud Router and VPC firewall rules. For example, if you're connecting your on-premises network to Google Cloud using multiple hybrid networking connections, you could send some traffic from on-premises to specific Google APIs or published services over one connection and the rest over another connection. Also, you can use Private Service Connect global access to provide failover options.
Variations
The gated ingress architecture pattern can be combined with other approaches to meet different design requirements, while still considering the communication requirements of the pattern. The pattern offers the following options:
- Access Google APIs from other environments
- Expose application backends to other environments using Private Service Connect
- Use a hub and spoke architecture to expose application backends to other environments

Access Google APIs from other environments
For scenarios requiring access to Google services, like Cloud Storage or BigQuery, without sending traffic over the public internet, Private Service Connect offers a solution. As shown in the following diagram, it enables reachability to the supported Google APIs and services (including Google Maps, Google Ads, and Google Cloud) from on-premises or other cloud environments through a hybrid network connection using the IP address of the Private Service Connect endpoint. For more information about accessing Google APIs through Private Service Connect endpoints, see About accessing Google APIs through endpoints.
In the preceding diagram, your on-premises network must be connected to the transit (consumer) VPC network using either Cloud VPN tunnels or a Cloud Interconnect VLAN attachment.
Google APIs can be accessed by using endpoints or backends. Endpoints let you target a bundle of Google APIs. Backends let you target a specific regional Google API.
Note: Private Service Connect endpoints for Google APIs are registered with Service Directory, where you can store, manage, and publish services.

Expose application backends to other environments using Private Service Connect
In specific scenarios, as highlighted by the tiered hybrid pattern, you might need to deploy backends in Google Cloud while maintaining frontends in private computing environments. While less common, this approach is applicable when dealing with heavyweight, monolithic frontends that might rely on legacy components. Or, more commonly, when managing distributed applications across multiple environments, including on-premises and other clouds, that require connectivity to backends hosted in Google Cloud over a hybrid network.
In such an architecture, you can use a local API gateway or load balancer in the private on-premises environment, or other cloud environments, to directly expose the application frontend to the public internet. Using Private Service Connect in Google Cloud facilitates private connectivity to the backends that are exposed through a Private Service Connect endpoint, ideally using predefined APIs, as illustrated in the following diagram:
The design in the preceding diagram uses an Apigee Hybrid deployment consisting of a management plane in Google Cloud and a runtime plane hosted in your other environment. You can install and manage the runtime plane on a distributed API gateway on one of the supported Kubernetes platforms in your on-premises environment or in other cloud environments. Based on your requirements for distributed workloads across Google Cloud and other environments, you can use Apigee on Google Cloud with Apigee Hybrid. For more information, see Distributed API gateways.
Use a hub and spoke architecture to expose application backends to other environments
Exposing APIs from application backends hosted in Google Cloud across different VPC networks might be required in certain scenarios. As illustrated in the following diagram, a hub VPC serves as a central point of interconnection for the various VPCs (spokes), enabling secure communication over private hybrid connectivity. Optionally, local API gateway capabilities in other environments, such as Apigee Hybrid, can be used to terminate client requests locally where the application frontend is hosted.
As illustrated in the preceding diagram:
- To provide additional NGFW Layer 7 inspection abilities, the NVA with NGFW capabilities is optionally integrated with the design. You might require these abilities to comply with specific security requirements and the security policy standards of your organization.
This design assumes that spoke VPCs don't require direct VPC-to-VPC communication.
- If spoke-to-spoke communication is required, you can use the NVA to facilitate such communication.
- If you have different backends in different VPCs, you can use Private Service Connect to expose these backends to the Apigee VPC.
- If VPC peering is used for the northbound and southbound connectivity between spoke VPCs and the hub VPC, you need to consider the transitivity limitation of VPC networking over VPC peering. To overcome this limitation, you can use any of the following options:
- To interconnect the VPCs, use an NVA.
- Where applicable, consider the Private Service Connect model.
- To establish connectivity between the Apigee VPC and backends that are located in other Google Cloud projects in the same organization without additional networking components, use Shared VPC.
If NVAs are required for traffic inspection—including traffic from your other environments—the hybrid connectivity to on-premises or other cloud environments should be terminated on the hybrid-transit VPC.
If the design doesn't include the NVA, you can terminate the hybrid connectivity at the hub VPC.
If certain load-balancing functionalities or security capabilities are required, like adding Google Cloud Armor DDoS protection or WAF, you can optionally deploy an external Application Load Balancer at the perimeter through an external VPC before routing external client requests to the backends.
Best practices
- For situations where client requests from the internet need to be received locally by a frontend hosted in a private on-premises or other cloud environment, consider using Apigee Hybrid as an API gateway solution. This approach also facilitates a seamless migration of the solution to a completely Google Cloud-hosted environment while maintaining the consistency of the API platform (Apigee).
- Use Apigee Adapter for Envoy with an Apigee Hybrid deployment with Kubernetes architecture, where applicable to your requirements and the architecture.
- The design of VPCs and projects in Google Cloud should follow the resource hierarchy and secure communication model requirements, as described in this guide.
- Incorporating a transit VPC into this design provides the flexibility to provision additional perimeter security measures and hybrid connectivity outside the workload VPC.
- Use Private Service Connect to access Google APIs and services from on-premises environments or other cloud environments using the internal IP address of the endpoint over a hybrid connectivity network. For more information, see Access the endpoint from on-premises hosts.
- To help protect Google Cloud services in your projects and help mitigate the risk of data exfiltration, use VPC Service Controls to specify service perimeters at the project or VPC network level.
- When needed, you can extend service perimeters to a hybrid environment over a VPN or Cloud Interconnect. For more information about the benefits of service perimeters, see Overview of VPC Service Controls.
- Use VPC firewall rules or firewall policies to control network-level access to Private Service Connect resources through the Private Service Connect endpoint. For example, outbound firewall rules at the application (consumer) VPC can restrict access from VM instances to the IP address or subnet of your endpoints. For more information about VPC firewall rules in general, see VPC firewall rules.
- When designing a solution that includes NVAs, it's important to consider the high availability (HA) of the NVAs to avoid a single point of failure that could block all communication. Follow the HA and redundancy design and implementation guidance provided by your NVA vendor.
- To strengthen perimeter security and secure your API gateway that's deployed in the respective environment, you can optionally implement load balancing and web application firewall mechanisms in your other computing environment (hybrid or other cloud). Implement these options at the perimeter network that's directly connected to the internet.
- If instances require internet access, use Cloud NAT in the application VPC to allow workloads to access the internet. Doing so lets you avoid assigning external public IP addresses to VM instances in systems that are deployed behind an API gateway or a load balancer.
- For outbound web traffic, use Secure Web Proxy. The proxy offers several benefits.
- Review the general best practices for hybrid and multicloud networking patterns.
Gated egress and gated ingress
The gated egress and gated ingress pattern uses a combination of gated egress and gated ingress for scenarios that demand bidirectional usage of selected APIs between workloads. Workloads can run in Google Cloud, in private on-premises environments, or in other cloud environments. In this pattern, you can use API gateways, Private Service Connect endpoints, or load balancers to expose specific APIs and optionally provide authentication, authorization, and API call audits.
The key distinction between this pattern and the meshed pattern lies in its application to scenarios that solely require bidirectional API usage or communication with specific IP address sources and destinations—for example, an application published through a Private Service Connect endpoint. Because communication is restricted to the exposed APIs or specific IP addresses, the networks across the environments don't need to align in your design. Common applicable scenarios include, but aren't limited to, the following:
- Mergers and acquisitions.
- Application integrations with partners.
- Integrations between applications and services of an organization with different organizational units that manage their own applications and host them in different environments.
The communication works as follows:
- Workloads that you deploy in Google Cloud can communicate with the API gateway (or specific destination IP addresses) by using internal IP addresses. Other systems deployed in the private computing environment can't be reached.
- Conversely, workloads that you deploy in other computing environments can communicate with the Google Cloud-side API gateway (or a specific published endpoint IP address) by using internal IP addresses. Other systems deployed in Google Cloud can't be reached.
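One way to reason about this model is as a default-deny matrix over (source, destination) pairs, where only the exposed gateway or endpoint addresses appear. The following Python sketch uses hypothetical names purely for illustration; in practice this policy is enforced with firewall rules and route advertisements, not application code:

```python
# Hypothetical allowlist: only the exposed gateways and endpoints are
# reachable; direct access to any other system in either environment is denied.
ALLOWED_FLOWS = {
    ("gcp-workload", "on-prem-api-gateway"),   # gated egress direction
    ("on-prem-workload", "gcp-psc-endpoint"),  # gated ingress direction
}

def is_allowed(source, destination):
    """Deny by default; permit only the pre-approved API paths."""
    return (source, destination) in ALLOWED_FLOWS

print(is_allowed("gcp-workload", "on-prem-api-gateway"))  # True
print(is_allowed("gcp-workload", "on-prem-database"))     # False: not exposed
```

The default-deny posture is the point: adding a new cross-environment integration means explicitly adding one pair, rather than opening broad network reachability.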
Architecture
The following diagram shows a reference architecture for the gated egress and gated ingress pattern:
The design approach in the preceding diagram has the following elements:
- On the Google Cloud side, you deploy workloads in a VPC (or Shared VPC) without exposing them directly to the internet.
- The Google Cloud environment network is extended to other computing environments. That environment can be on-premises or on another cloud. To extend the environment, use a suitable hybrid and multicloud connectivity communication pattern to facilitate the communication between environments so they can use internal IP addresses.
- Optionally, by enabling access to specific target IP addresses, you can use a transit VPC to help add a perimeter security layer outside of your application VPC.
- You can use Cloud Next Generation Firewall or network virtual appliances (NVAs) with next generation firewalls (NGFWs) at the transit VPC to inspect traffic and to allow or prohibit access to certain APIs from specific sources before the traffic reaches your application VPC.
- APIs should be accessed through an API gateway or a load balancer to provide a proxy layer, and an abstraction or facade, for your service APIs.
- For applications consumed as APIs, you can also use Private Service Connect to provide an internal IP address for the published application.
- All environments use an overlap-free RFC 1918 IP address space.
A common application of this pattern involves deploying application backends (or a subset of application backends) in Google Cloud while hosting other backend and frontend components in on-premises environments or in other clouds (tiered hybrid pattern or partitioned multicloud pattern). As applications evolve and migrate to the cloud, dependencies and preferences for specific cloud services often emerge.
Sometimes these dependencies and preferences lead to the distribution of applications and backends across different cloud providers. Also, some applications might be built with a combination of resources and services distributed across on-premises environments and multiple cloud environments.
For distributed applications, the capabilities of external Cloud Load Balancing, combined with hybrid and multicloud connectivity, can be used to terminate user requests and route them to frontends or backends in other environments. This routing occurs over a hybrid network connection, as illustrated in the following diagram. This integration enables the gradual distribution of application components across different environments. Requests from the frontend to backend services hosted in Google Cloud communicate securely over the established hybrid network connection, facilitated by an internal load balancer (ILB in the diagram).
Using the Google Cloud design in the preceding diagram helps with the following:
- Facilitates two-way communication between Google Cloud, on-premises, and other cloud environments using predefined APIs on both sides that align with the communication model of this pattern.
- To provide global frontends for internet-facing applications with distributed application components (frontends or backends), and to accomplish the following goals, you can use the advanced load balancing and security capabilities of Google Cloud distributed at points of presence (PoPs):
- Reduce capital expenses and simplify operations by using serverless managed services.
- Optimize connections to application backends globally for speed and latency.
- Google Cloud Cross-Cloud Network enables multicloud communication between application components over optimal private connections.
- Cache high-demand static content and improve application performance for applications that use global Cloud Load Balancing by providing access to Cloud CDN.
- Secure the global frontends of the internet-facing applications by using Google Cloud Armor capabilities that provide globally distributed web application firewall (WAF) and DDoS mitigation services.
- Optionally, you can incorporate Private Service Connect into your design. Doing so enables private, fine-grained access to Google Cloud service APIs or your published services from other environments without traversing the public internet.
Variations
The gated egress and gated ingress architecture patterns can be combined with other approaches to meet different design requirements, while still considering the communication requirements of this pattern. The patterns offer the following options:
- Distributed API gateways
- Bidirectional API communication using Private Service Connect
- Bidirectional communication using Private Service Connect endpoints and interfaces
Distributed API gateways
In scenarios like the one based on the partitioned multicloud pattern, applications (or application components) can be built in different cloud environments—including a private on-premises environment. The common requirement is to route client requests to the application frontend directly to the environment where the application (or the frontend component) is hosted. This kind of communication requires a local load balancer or an API gateway. These applications and their components might also require specific API platform capabilities for integration.
The following diagram illustrates how Apigee and Apigee Hybrid are designed to address such requirements with a localized API gateway in each environment. API platform management is centralized in Google Cloud. This design helps to enforce strict access control measures where only pre-approved IP addresses (target and destination APIs or Private Service Connect endpoint IP addresses) can communicate between Google Cloud and the other environments.
The following list describes the two distinct communication paths in the preceding diagram that use the Apigee API gateway:
- Client requests arrive at the application frontend directly in the environment that hosts the application (or the frontend component).
- API gateways and proxies within each environment handle client and application API requests in different directions across multiple environments.
- The API gateway functionality in Google Cloud (Apigee) exposes the application (frontend or backend) components that are hosted in Google Cloud.
- The API gateway functionality in another environment (Apigee Hybrid) exposes the application frontend (or backend) components that are hosted in that environment.
Optionally, you can consider using a transit VPC. A transit VPC can provide flexibility to separate concerns and to perform security inspection and hybrid connectivity in a separate VPC network. From an IP address reachability standpoint, a transit VPC (where the hybrid connectivity is attached) facilitates the following requirements to maintain end-to-end reachability:
- The IP addresses for target APIs need to be advertised to the other environments where clients or requesters are hosted.
- The IP addresses for the hosts that need to communicate with the target APIs have to be advertised to the environment where the target API resides—for example, the IP addresses of the API requester (the client). The exception is when communication occurs through a load balancer, proxy, Private Service Connect endpoint, or NAT instance.
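Those two advertisement requirements can be sketched as a simple reachability check. The prefixes and helper function below are hypothetical; in practice the advertisements are configured on Cloud Router and the remote BGP peers:

```python
import ipaddress

# Hypothetical route advertisements over the transit VPC.
advertised_to_remote = [ipaddress.ip_network("10.50.0.0/24")]   # target API range
advertised_to_gcp = [ipaddress.ip_network("192.168.10.0/24")]   # client range

def reachable(client_ip, api_ip):
    """End-to-end reachability needs both directions advertised (unless a
    proxy, load balancer, PSC endpoint, or NAT instance hides one side)."""
    client = ipaddress.ip_address(client_ip)
    api = ipaddress.ip_address(api_ip)
    return (any(api in net for net in advertised_to_remote)
            and any(client in net for net in advertised_to_gcp))

print(reachable("192.168.10.5", "10.50.0.8"))   # True: both prefixes advertised
print(reachable("192.168.99.5", "10.50.0.8"))   # False: client prefix missing
```

A check like this is useful in a preflight script: a one-sided advertisement is a common cause of API calls that time out in only one direction.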
To extend connectivity to the remote environment, this design uses direct VPC peering with the custom route exchange capability. The design lets specific API requests that originate from workloads hosted within the Google Cloud application VPC route through the transit VPC. Alternatively, you can use a Private Service Connect endpoint in the application VPC that's associated with a load balancer that has a hybrid network endpoint group backend in the transit VPC. That setup is described in the next section: Bidirectional API communication using Private Service Connect.
Bidirectional API communication using Private Service Connect
Sometimes, enterprises might not need to use an API gateway (likeApigee) immediately, or might want to add it later.However, there might be business requirements toenable communication and integration between certain applications in differentenvironments. For example, if your company acquired another company, you mightneed to expose certain applications to that company. They might need to exposeapplications to your company. Both companies might each have their own workloadshosted in different environments (Google Cloud, on-premises, or in otherclouds), and must avoid IP address overlap. In such cases, you can usePrivate Service Connect to facilitate effective communication.
For applications consumed as APIs, you can also usePrivate Service Connect to provide a private addressfor the published applications, enabling secure access within the privatenetwork acrossregions andover hybrid connectivity. This abstractionfacilitates the integration of resources from diverse clouds and on-premisesenvironments over a hybrid and multicloud connectivity model. It also enablesthe assembly of applications across multicloud and on-premises environments.This can satisfy different communication requirements, like integrating secureapplications where an API gateway isn't used or isn't planned to be used.
By using Private Service Connect with Cloud Load Balancing, as shown in the following diagram, you can achieve two distinct communication paths. Each path is initiated from a different direction for a separate connectivity purpose, ideally through API calls.
- All the design considerations and recommendations for Private Service Connect discussed in this guide apply to this design.
- If additional Layer 7 inspection is required, you can integrate NVAs with this design (at the transit VPC).
- This design can be used with or without API gateways.
The two connectivity paths depicted in the preceding diagram represent independent connections and don't illustrate two-way communication of a single connection or flow.
Bidirectional communication using Private Service Connect endpoints and interfaces
As discussed in the gated ingress pattern, one of the options to enable client-service communication is to use a Private Service Connect endpoint to expose a service in a producer VPC to a consumer VPC. That connectivity can be extended to an on-premises environment, or even to another cloud provider environment, over hybrid connectivity. However, in some scenarios, the hosted service can also require private communication.
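Publishing a service through a Private Service Connect endpoint, as described above, might look like the following sketch. Every name here (the forwarding rule, subnets, VPCs, project, and region) is a hypothetical placeholder; the producer's internal load balancer and the PSC NAT subnet are assumed to already exist.

```shell
# Producer side: publish the internal load balancer behind a service attachment.
gcloud compute service-attachments create api-service-attachment \
    --region=us-central1 \
    --producer-forwarding-rule=producer-fwd-rule \
    --connection-preference=ACCEPT_AUTOMATIC \
    --nat-subnets=psc-nat-subnet

# Consumer side: reserve an internal IP and create the endpoint that exposes
# the published service inside the consumer VPC.
gcloud compute addresses create psc-endpoint-ip \
    --region=us-central1 \
    --subnet=consumer-subnet
gcloud compute forwarding-rules create psc-endpoint \
    --region=us-central1 \
    --network=consumer-vpc \
    --address=psc-endpoint-ip \
    --target-service-attachment=projects/PRODUCER_PROJECT/regions/us-central1/serviceAttachments/api-service-attachment
```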
For example, the application (producer) VPC might require private communication with a remote environment, such as an on-premises environment, to access a certain service, like retrieving data from data sources that can be hosted within the consumer VPC or outside it.
In such a scenario, Private Service Connect interfaces enable a service producer VM instance to access a consumer's network. An interface does so by sharing a network interface, while still maintaining the separation of producer and consumer roles. With this network interface in the consumer VPC, the application VM can access consumer resources as if they resided locally in the producer VPC.
A Private Service Connect interface is a network interface attached to the consumer (transit) VPC. From that interface, it's possible to reach external destinations that are reachable from the consumer (transit) VPC. Therefore, this connection can be extended over hybrid connectivity to an external environment, such as an on-premises environment, as illustrated in the following diagram:
If the consumer VPC belongs to an external organization or entity, like a third-party organization, typically you won't have the ability to secure the communication to the Private Service Connect interface in the consumer VPC. In such a scenario, you can define security policies in the guest OS of the Private Service Connect interface VM. For more information, see Configure security for Private Service Connect interfaces. If that approach doesn't comply with the security compliance requirements or standards of your organization, consider an alternative approach.
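A Private Service Connect interface is created in two steps: the consumer creates a network attachment, and the producer creates a VM whose additional NIC binds to that attachment. The following is a minimal sketch; all names, the region, and the project ID are hypothetical placeholders.

```shell
# Consumer side: create a network attachment in the consumer (transit) VPC
# that accepts producer connections automatically.
gcloud compute network-attachments create consumer-attachment \
    --region=us-central1 \
    --connection-preference=ACCEPT_AUTOMATIC \
    --subnets=transit-subnet

# Producer side: create a VM whose second NIC is a Private Service Connect
# interface bound to that network attachment.
gcloud compute instances create producer-vm \
    --zone=us-central1-a \
    --network-interface=subnet=producer-subnet \
    --network-interface=network-attachment=projects/CONSUMER_PROJECT/regions/us-central1/networkAttachments/consumer-attachment
```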
Best practices
For situations where client requests from the internet need to be received locally by a frontend hosted in a private on-premises or other cloud environment, consider using Apigee hybrid as an API gateway solution.
- This approach also facilitates a migration of the solution to a fully Google Cloud-hosted environment while maintaining the consistency of the API platform (Apigee).
To minimize latency and optimize costs for high volumes of outbound data transfers to your other environments when those environments are in long-term or permanent hybrid or multicloud setups, consider the following:
- Use Cloud Interconnect or Cross-Cloud Interconnect.
- To terminate user connections at the targeted frontend in the appropriate environment, use Apigee hybrid.
Where applicable to your requirements and the architecture, use Apigee Adapter for Envoy with an Apigee hybrid deployment on Kubernetes.
Before designing the connectivity and routing paths, you first need to identify what traffic or API requests need to be directed to a local or remote API gateway, along with the source and destination environments.
Use VPC Service Controls to protect Google Cloud services in your projects and to mitigate the risk of data exfiltration, by specifying service perimeters at the project or VPC network level.
- You can extend service perimeters to a hybrid environment over an authorized VPN or Cloud Interconnect. For more information about the benefits of service perimeters, see Overview of VPC Service Controls.
Use Virtual Private Cloud (VPC) firewall rules or firewall policies to control network-level access to Private Service Connect resources through the Private Service Connect endpoint. For example, outbound firewall rules at the application (consumer) VPC can restrict access from VM instances to the IP address or subnet of your endpoints.
When using a Private Service Connect interface, you must protect the communication to the interface by configuring security for the Private Service Connect interface.
If a workload in a private subnet requires internet access, use Cloud NAT to avoid assigning an external IP address to the workload and exposing it to the public internet.
- For outbound web traffic, use Secure Web Proxy. The proxy offers several benefits.
Review the general best practices for hybrid and multicloud networking patterns.
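The earlier recommendation to restrict outbound access to your Private Service Connect endpoints could be sketched as the following pair of egress firewall rules. The network name, endpoint IP (`10.10.0.5`), and priorities are hypothetical placeholders.

```shell
# Deny all egress at a low priority (higher number = lower priority) ...
gcloud compute firewall-rules create deny-all-egress \
    --network=app-vpc --direction=EGRESS --action=DENY --rules=all \
    --destination-ranges=0.0.0.0/0 --priority=65000

# ... then allow egress only to the Private Service Connect endpoint on 443.
gcloud compute firewall-rules create allow-psc-egress \
    --network=app-vpc --direction=EGRESS --action=ALLOW --rules=tcp:443 \
    --destination-ranges=10.10.0.5/32 --priority=1000
```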
Handover patterns
With the handover pattern, the architecture is based on using Google Cloud-provided storage services to connect a private computing environment to projects in Google Cloud. This pattern applies primarily to setups that follow the analytics hybrid and multicloud architecture pattern, where:
- Workloads that are running in a private computing environment or in another cloud upload data to shared storage locations. Depending on use cases, uploads might happen in bulk or in smaller increments.
- Google Cloud-hosted workloads or other Google services (data analytics and artificial intelligence services, for example) consume data from the shared storage locations and process it in a streaming or batch fashion.
Architecture
The following diagram shows a reference architecture for the handover pattern.
The preceding architecture diagram shows the following workflows:
- On the Google Cloud side, you deploy workloads into an application VPC. These workloads can include data processing, analytics, and analytics-related frontend applications.
- To securely expose frontend applications to users, you can use Cloud Load Balancing or API Gateway.
- A set of Cloud Storage buckets or Pub/Sub topics receives data uploaded from the private computing environment and makes it available for further processing by workloads deployed in Google Cloud. Using Identity and Access Management (IAM) policies, you can restrict access to trusted workloads.
- Use VPC Service Controls to restrict access to services and to minimize unwarranted data exfiltration risks from Google Cloud services.
- In this architecture, communication with Cloud Storage buckets, or Pub/Sub, is conducted over public networks, or through private connectivity using VPN, Cloud Interconnect, or Cross-Cloud Interconnect. Typically, the decision on how to connect depends on several aspects, such as the following:
- Expected traffic volume
- Whether it's a temporary or permanent setup
- Security and compliance requirements
Variation
The design options outlined in the gated ingress pattern, which uses Private Service Connect endpoints for Google APIs, can also be applied to this pattern. Specifically, that design provides access to Cloud Storage, BigQuery, and other Google service APIs. This approach requires private IP addressing over a hybrid and multicloud network connection, such as VPN, Cloud Interconnect, or Cross-Cloud Interconnect.
Best practices
- Lock down access to Cloud Storage buckets and Pub/Sub topics.
- When applicable, use cloud-first, integrated data movement solutions like the Google Cloud suite of solutions. To meet your use case needs, these solutions are designed to efficiently move, integrate, and transform data.
Assess the different factors that influence the data transfer options, such as cost, expected transfer time, and security. For more information, see Evaluating your transfer options.
To minimize latency and prevent high-volume data transfer and movement over the public internet, consider using Cloud Interconnect or Cross-Cloud Interconnect, including accessing Private Service Connect endpoints within your Virtual Private Cloud for Google APIs.
To protect Google Cloud services in your projects and to mitigate the risk of data exfiltration, use VPC Service Controls. VPC Service Controls lets you specify service perimeters at the project or VPC network level.
- You can extend service perimeters to a hybrid environment over an authorized VPN or Cloud Interconnect. For more information about the benefits of service perimeters, see Overview of VPC Service Controls.
Communicate with publicly published data analytics workloads that are hosted on VM instances through an API gateway, a load balancer, or a virtual network appliance. Use one of these communication methods for added security and to avoid making these instances directly reachable from the internet.
If internet access is required, Cloud NAT can be used in the same VPC to handle outbound traffic from the instances to the public internet.
Review the general best practices for hybrid and multicloud networking topologies.
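Locking down access to a Cloud Storage landing bucket, as recommended above, might look like the following sketch. The bucket name, project, and service account are hypothetical placeholders.

```shell
# Enforce uniform bucket-level access so that only IAM (not object ACLs)
# governs access to the bucket.
gcloud storage buckets update gs://analytics-landing-bucket \
    --uniform-bucket-level-access

# Grant only the pipeline's service account read access to objects.
gcloud storage buckets add-iam-policy-binding gs://analytics-landing-bucket \
    --member=serviceAccount:pipeline-sa@my-project.iam.gserviceaccount.com \
    --role=roles/storage.objectViewer
```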
General best practices
When designing and onboarding cloud identities, resource hierarchy, and landing zone networks, consider the design recommendations in Landing zone design in Google Cloud and the Google Cloud security best practices covered in the enterprise foundations blueprint. Validate your selected design against the following documents:
- Best practices and reference architectures for VPC design
- Decide a resource hierarchy for your Google Cloud landing zone
- Google Cloud Well-Architected Framework: Security, privacy, and compliance
Also, consider the following general best practices:
When choosing a hybrid or multicloud network connectivity option, consider business and application requirements such as SLAs, performance, security, cost, reliability, and bandwidth. For more information, see Choosing a Network Connectivity product and Patterns for connecting other cloud service providers with Google Cloud.
Use Shared VPC networks on Google Cloud instead of multiple VPC networks when appropriate and aligned with your resource hierarchy design requirements. For more information, see Deciding whether to create multiple VPC networks.
Follow the best practices for planning accounts and organizations.
Where applicable, establish a common identity between environments so that systems can authenticate securely across environment boundaries.
To securely expose applications to corporate users in a hybrid setup, and to choose the approach that best fits your requirements, follow the recommended ways to integrate Google Cloud with your identity management system.
When designing your on-premises and cloud environments, consider IPv6 addressing early on, and account for which services support it. For more information, see An Introduction to IPv6 on Google Cloud, which summarizes the services that were supported when that blog post was written.
When designing, deploying, and managing your VPC firewall rules, you can:
- Use service-account-based filtering instead of network-tag-based filtering if you need strict control over how firewall rules are applied to VMs.
- Use firewall policies when you group several firewall rules, so that you can update them all at once. You can also make the policy hierarchical. For hierarchical firewall policy specifications and details, see Hierarchical firewall policies.
- Use geolocation objects in firewall policies when you need to filter external IPv4 and external IPv6 traffic based on specific geographic locations or regions.
- Use Threat Intelligence for firewall policy rules if you need to secure your network by allowing or blocking traffic based on Threat Intelligence data, such as known malicious IP addresses, or based on public cloud IP address ranges. For example, you can allow traffic from specific public cloud IP address ranges if your services need to communicate with that public cloud only. For more information, see Best practices for firewall rules.
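The service-account-based filtering recommended above might be sketched as follows. The network, ports, and the frontend and backend service accounts are hypothetical placeholders.

```shell
# Allow traffic to backend VMs only when it originates from VMs running as
# the frontend service account, instead of matching on network tags.
gcloud compute firewall-rules create allow-frontend-to-backend \
    --network=app-vpc --direction=INGRESS --action=ALLOW --rules=tcp:8443 \
    --source-service-accounts=frontend-sa@my-project.iam.gserviceaccount.com \
    --target-service-accounts=backend-sa@my-project.iam.gserviceaccount.com
```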
You should always design your cloud and network security using a multilayer security approach by considering additional security layers, like the following:
- Google Cloud Armor
- Cloud Intrusion Detection System
- Cloud Next Generation Firewall IPS
- Threat Intelligence for firewall policy rules
These additional layers can help you filter, inspect, and monitor a wide variety of threats at the network and application layers for analysis and prevention.
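As one layer of such a defense, a Google Cloud Armor security policy could be sketched as follows. The policy name, region code, backend service name, and rule priorities are hypothetical placeholders; the geo-based and preconfigured WAF rules are illustrative choices, not required ones.

```shell
# Create a security policy with a geo-based deny rule and a preconfigured
# SQL-injection WAF rule.
gcloud compute security-policies create edge-policy \
    --description="Layered edge protection"

gcloud compute security-policies rules create 1000 \
    --security-policy=edge-policy \
    --expression="origin.region_code == 'XX'" \
    --action=deny-403

gcloud compute security-policies rules create 2000 \
    --security-policy=edge-policy \
    --expression="evaluatePreconfiguredWaf('sqli-v33-stable')" \
    --action=deny-403

# Attach the policy to the load balancer's backend service.
gcloud compute backend-services update web-backend \
    --security-policy=edge-policy --global
```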
When deciding where DNS resolution should be performed in a hybrid setup, we recommend using two authoritative DNS systems: one for your private Google Cloud environment, and one for your on-premises resources that are hosted by existing DNS servers in your on-premises environment. For more information, see Choose where DNS resolution is performed.
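In this two-authoritative-servers setup, queries from Google Cloud for on-premises names are typically handled by a Cloud DNS forwarding zone, which might be sketched as follows. The domain, VPC name, and DNS server addresses are hypothetical placeholders.

```shell
# Forward queries for the on-premises domain from Google Cloud to the
# existing on-premises DNS servers over hybrid connectivity.
gcloud dns managed-zones create onprem-forwarding-zone \
    --description="Forward corp.example.com to on-premises DNS" \
    --dns-name="corp.example.com." \
    --visibility=private \
    --networks=app-vpc \
    --forwarding-targets=192.168.10.53,192.168.11.53
```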
Where possible, always expose applications through APIs using an API gateway or load balancer. We recommend that you consider an API platform like Apigee. Apigee acts as an abstraction or facade for your backend service APIs, combined with security capabilities, rate limiting, quotas, and analytics.
An API platform (gateway or proxy) and Application Load Balancer aren't mutually exclusive. Sometimes, using API gateways and load balancers together can provide a more robust and secure solution for managing and distributing API traffic at scale. Using Cloud Load Balancing with API gateways lets you accomplish the following:
Deliver high-performing APIs with Apigee and Cloud CDN, to:
- Reduce latency
- Host APIs globally
Increase availability for peak traffic seasons
For more information, watch Delivering high-performing APIs with Apigee and Cloud CDN on YouTube.
Implement advanced traffic management.
Use Google Cloud Armor as a DDoS protection, WAF, and network security service to protect your APIs.
Manage efficient load balancing across gateways in multiple regions. For more information, watch Securing APIs and Implementing multi-region failover with PSC and Apigee.
To determine which Cloud Load Balancing product to use, you must first determine what traffic type your load balancers must handle. For more information, see Choose a load balancer.
When Cloud Load Balancing is used, you should use its application capacity optimization abilities where applicable. Doing so can help you address some of the capacity challenges that can occur in globally distributed applications.
- For a deep dive on latency, see Optimize application latency with load balancing.
While Cloud VPN encrypts traffic between environments, with Cloud Interconnect you need to use either MACsec or HA VPN over Cloud Interconnect to encrypt traffic in transit at the connectivity layer. For more information, see How can I encrypt my traffic over Cloud Interconnect.
- You can also consider service layer encryption using TLS. For more information, see Decide how to meet compliance requirements for encryption in transit.
If you need more traffic volume over VPN hybrid connectivity than a single VPN tunnel can support, consider using the active/active HA VPN routing option.
- For long-term hybrid or multicloud setups with high outbound data transfer volumes, consider Cloud Interconnect or Cross-Cloud Interconnect. Those connectivity options help to optimize connectivity performance and might reduce outbound data transfer charges for traffic that meets certain conditions. For more information, see Cloud Interconnect pricing.
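An active/active HA VPN topology uses one tunnel per gateway interface, with both BGP sessions advertising routes at the same priority. A minimal sketch follows; the gateway, router, peer gateway, and secrets are hypothetical placeholders, and the external peer gateway resource is assumed to already exist.

```shell
# Create an HA VPN gateway (it gets two interfaces, 0 and 1).
gcloud compute vpn-gateways create ha-vpn-gw \
    --network=app-vpc --region=us-central1

# One tunnel per interface; with equal BGP route priorities on both
# sessions, traffic is balanced across the tunnels (active/active).
gcloud compute vpn-tunnels create tunnel-0 \
    --region=us-central1 --vpn-gateway=ha-vpn-gw --interface=0 \
    --peer-external-gateway=onprem-peer-gw --peer-external-gateway-interface=0 \
    --router=cloud-router --ike-version=2 --shared-secret=SECRET_0

gcloud compute vpn-tunnels create tunnel-1 \
    --region=us-central1 --vpn-gateway=ha-vpn-gw --interface=1 \
    --peer-external-gateway=onprem-peer-gw --peer-external-gateway-interface=1 \
    --router=cloud-router --ike-version=2 --shared-secret=SECRET_1
```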
When connecting to Google Cloud resources and trying to choose between Cloud Interconnect, Direct Peering, or Carrier Peering, we recommend using Cloud Interconnect, unless you need to access Google Workspace applications. For more information, you can compare the features of Direct Peering with Cloud Interconnect, and Carrier Peering with Cloud Interconnect.
Allocate enough IP address space from your existing RFC 1918 address space to accommodate your cloud-hosted systems.
If you have technical restrictions that require you to keep your IP address range, you can:

- Use the same internal IP addresses for your on-premises workloads while migrating them to Google Cloud, using hybrid subnets.
- Provision and use your own public IPv4 addresses for Google Cloud resources using bring your own IP (BYOIP) to Google.
If the design of your solution requires exposing a Google Cloud-based application to the public internet, consider the design recommendations discussed in Networking for internet-facing application delivery.
Where applicable, use Private Service Connect endpoints to allow workloads in Google Cloud, on-premises, or in another cloud environment with hybrid connectivity to privately access Google APIs or published services, using internal IP addresses in a fine-grained fashion.
When using Private Service Connect, you must control the following:
- Who can deploy Private Service Connect resources.
- Whether connections can be established between consumers and producers.
- Which network traffic is allowed to access those connections.
For more information, see Private Service Connect security.
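A Private Service Connect endpoint for Google APIs, as mentioned above, might be sketched as follows. The VPC name, endpoint name, and the internal address (`10.2.0.2`) are hypothetical placeholders; on-premises hosts can then reach Google APIs at that address over hybrid connectivity, provided that DNS and routes are set up accordingly.

```shell
# Reserve a global internal address for the endpoint.
gcloud compute addresses create google-apis-psc-ip \
    --global --purpose=PRIVATE_SERVICE_CONNECT \
    --addresses=10.2.0.2 --network=app-vpc

# Create the endpoint that routes requests for the all-apis bundle.
gcloud compute forwarding-rules create googleapisendpoint \
    --global --network=app-vpc \
    --address=google-apis-psc-ip \
    --target-google-apis-bundle=all-apis
```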
To achieve a robust cloud setup in the context of hybrid and multicloud architecture:
- Perform a comprehensive assessment of the required levels of reliability of the different applications across environments. Doing so can help you meet your objectives for availability and resilience.
- Understand the reliability capabilities and design principles of your cloud provider. For more information, see Google Cloud infrastructure reliability.
Cloud network visibility and monitoring are essential to maintain reliable communications. Network Intelligence Center provides a single console for managing network visibility, monitoring, and troubleshooting.
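For example, Network Intelligence Center's Connectivity Tests can verify reachability across the hybrid link. The following sketch uses hypothetical project, VM, and destination values.

```shell
# Verify reachability from a Google Cloud VM to an on-premises host
# across the hybrid connection.
gcloud network-management connectivity-tests create vm-to-onprem \
    --source-instance=projects/my-project/zones/us-central1-a/instances/app-vm \
    --destination-ip-address=192.168.10.20 \
    --destination-port=443 \
    --protocol=TCP
```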
What's next
- Learn more about the common architecture patterns that you can realize by using the networking patterns discussed in this document.
- Learn how to approach hybrid and multicloud and how to choose suitable workloads.
- Learn more about Google Cloud Cross-Cloud Network, a global network platform that is open, secure, and optimized for applications and users across on-premises environments and other clouds.
- Design reliable infrastructure for your workloads in Google Cloud: design guidance to help protect your applications against failures at the resource, zone, and region level.
- To learn more about designing highly available architectures in Google Cloud, check out patterns for resilient and scalable apps.
- Learn more about the possible connectivity options for connecting a GKE Enterprise cluster running in your on-premises or edge environment to the Google Cloud network, along with the impact of temporary disconnection from Google Cloud.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-01-23 UTC.