Decide the network design for your Google Cloud landing zone

When you design your landing zone, you must choose a network design that works for your organization. This document describes four common network designs, and helps you choose the option that best meets your organization's requirements and its preference for centralized or decentralized control. It's intended for network engineers, architects, and technical practitioners who are involved in creating the network design for your organization's landing zone.

This article is part of a series about landing zones.

Choose your network design

The network design that you choose depends primarily on the following factors:

  • Centralized or decentralized control: Depending on your organization's preferences, you must choose one of the following:
    • Centralize control over the network, including IP addressing, routing, and firewalling between different workloads.
    • Give your teams greater autonomy to run their own environments and build network elements within those environments themselves.
  • On-premises or hybrid cloud connectivity options: All the network designs discussed in this document provide access from on-premises to cloud environments through Cloud VPN or Cloud Interconnect. However, some designs require you to set up multiple connections in parallel, while others use the same connection for all workloads.
  • Security requirements: Your organization might require traffic between different workloads in Google Cloud to pass through centralized network appliances such as next generation firewalls (NGFW). This constraint influences your Virtual Private Cloud (VPC) network design.
  • Scalability: Some designs might be better for your organization than others, based on the number of workloads that you want to deploy, and the number of virtual machines (VMs), internal load balancers, and other resources that they will consume.

Decision points for network design

The following flowchart shows the decisions that you must make to choose the best network design for your organization.

Decisions for network designs.

The preceding diagram guides you through the following questions:

  1. Do you require Layer 7 inspection using network appliances between different workloads in Google Cloud?
  2. Do many of your workloads require on-premises connectivity?
    • If yes, go to decision point 4.
    • If no, proceed to the next question.
  3. Can your workloads communicate using private endpoints in a service producer and consumer model?
  4. Do you want to administer firewalling and routing centrally?

This chart is intended to help you make a decision. However, multiple designs might be suitable for your organization. In these instances, we recommend that you choose the design that fits best with your use case.
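The flowchart's logic can also be written out directly. The following sketch encodes the four questions in order; the option that each answer leads to is inferred from the per-option guidance later in this document, not stated in the chart itself, so treat the mapping as an approximation.

```python
def choose_network_design(
    needs_l7_inspection: bool,
    most_workloads_need_onprem: bool,
    can_use_private_endpoints: bool,
    wants_central_control: bool,
) -> str:
    """Walk the decision points in order and return a suggested option."""
    # 1. Layer 7 inspection between workloads needs centralized appliances.
    if needs_l7_inspection:
        return "Option 2: hub-and-spoke with centralized appliances"
    # 2. If many workloads need on-premises connectivity, skip to question 4.
    if not most_workloads_need_onprem:
        # 3. Defined private endpoints in a producer-consumer model.
        if can_use_private_endpoints:
            return "Option 4: Private Service Connect"
    # 4. Centralized versus decentralized firewalling and routing.
    if wants_central_control:
        return "Option 1: Shared VPC network for each environment"
    return "Option 3: hub-and-spoke without appliances"


print(choose_network_design(False, False, False, True))  # -> Option 1
```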

Network design options

The following sections describe four common design options. We recommend option 1 for most use cases. The other designs discussed in this section are alternatives that apply to specific organizational edge-case requirements.

The best fit for your use case might also be a network that combines elements from multiple design options discussed in this section. For example, you can use Shared VPC networks in hub-and-spoke topologies for better collaboration, centralized control, and to limit the number of VPC spokes. Or, you might design most workloads in a Shared VPC topology but isolate a small number of workloads in separate VPC networks that only expose services through a few defined endpoints using Private Service Connect.

Note: When the design options refer to connections to on-premises networks, you can use the same concepts for connections to other cloud service providers (CSPs).

Option 1: Shared VPC network for each environment

We recommend this network design for most use cases. This design uses separate Shared VPC networks for each deployment environment that you have in Google Cloud (development, testing, and production). This design lets you centrally manage network resources in a common network and provides network isolation between the different environments.
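To illustrate, the following is a minimal sketch of how the host project for one environment might be enabled for Shared VPC by using the google-cloud-compute Python client. The project IDs are hypothetical, and the subnet-level IAM grants that service project teams need are omitted.

```python
# pip install google-cloud-compute
from google.cloud import compute_v1

# Hypothetical project IDs -- replace with your own.
HOST_PROJECT = "prod-network-host"
SERVICE_PROJECT = "prod-workload-team-a"

projects = compute_v1.ProjectsClient()

# Mark the project that owns the VPC network as a Shared VPC host project.
projects.enable_xpn_host(project=HOST_PROJECT).result()

# Attach a workload (service) project so that its resources can use
# subnets from the host project's Shared VPC network.
request = compute_v1.ProjectsEnableXpnResourceRequest(
    xpn_resource=compute_v1.XpnResourceId(id=SERVICE_PROJECT, type_="PROJECT")
)
projects.enable_xpn_resource(
    project=HOST_PROJECT,
    projects_enable_xpn_resource_request_resource=request,
).result()
```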

Use this design when the following is true:

  • You want central control over firewalling and routing rules.
  • You need a simple, scalable infrastructure.
  • You need centralized IP address space management.

Avoid this design when the following is true:

  • You want developer teams to have full autonomy, including the ability to manage their own firewall rules, routing, and peering to other team networks.
  • You need Layer 7 inspection using NGFW appliances.

The following diagram shows an example implementation of this design.

Option 1 diagram.


By design, traffic from one environment cannot reach another environment. However, if specific workloads must communicate with each other, you can allow data transfer through controlled channels on-premises, or you can share data between applications with Google Cloud services like Cloud Storage or Pub/Sub. We recommend that you avoid directly connecting separated environments through VPC Network Peering, because it increases the risk of accidentally mixing data between the environments. Using VPC Network Peering between large environments also increases the risk of hitting VPC quotas around peering and peering groups.
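For example, the following minimal sketch shows the Pub/Sub approach to sharing data between environments. The project and topic names are hypothetical, and the topic and its IAM bindings are assumed to already exist.

```python
# pip install google-cloud-pubsub
from google.cloud import pubsub_v1

# Hypothetical names -- the topic lives in a project that both
# environments can reach through IAM, not through VPC connectivity.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("shared-data-project", "prod-to-dev-handoff")

# Publish a record from one environment; a subscriber in the other
# environment pulls it without any network path between the VPC networks.
future = publisher.publish(topic_path, b'{"export_id": 42}')
print(f"Published message {future.result()}")
```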


To implement this design option, see Create option 1: Shared VPC network for each environment.

Option 2: Hub-and-spoke topology with centralized appliances

This network design uses a hub-and-spoke topology. A hub VPC network contains a set of appliance VMs, such as NGFWs, that are connected to the spoke VPC networks that contain the workloads. Traffic between the workloads, on-premises networks, or the internet is routed through the appliance VMs for inspection and filtering.

Use this design when the following is true:

  • You require Layer 7 inspection between different workloads or applications.
  • You have a corporate mandate that specifies the security appliance vendor for all traffic.

Avoid this design when the following is true:

  • You don't require Layer 7 inspection for most of your workloads.
  • You don't want workloads on Google Cloud to communicate with each other at all.
  • You only need Layer 7 inspection for traffic going to on-premises networks.

The following diagram shows an example implementation of this pattern.

Option 2 diagram.

The preceding diagram shows the following:

  • A production environment that includes a hub VPC network and multiple spoke VPC networks that contain the workloads.
  • The spoke VPC networks are connected to the hub VPC network by using VPC Network Peering.
  • The hub VPC network has multiple instances of a virtual appliance in a managed instance group. Traffic to the managed instance group goes through an internal passthrough Network Load Balancer.
  • The spoke VPC networks communicate with each other through the virtual appliances by using static routes with the internal load balancer as the next hop, as shown in the sketch after this list.
  • Cloud Interconnect connects the transit VPC networks to on-premises locations.
  • On-premises networks are connected through the same Cloud Interconnect connections by using separate VLAN attachments.
  • The transit VPC networks are connected to a separate network interface on the virtual appliances, which lets you inspect and filter all traffic to and from these networks by using your appliance.
  • The development environment has the same VPC structure as the production environment.
  • This setup doesn't use source network address translation (SNAT). SNAT isn't required because Google Cloud uses symmetric hashing. For more information, see Symmetric hashing.
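The static routes in this design can be created with the google-cloud-compute Python client, as in the following minimal sketch. The project, network, and forwarding rule names are hypothetical; the route is defined in the hub VPC network and, with custom route export enabled on the peerings, is learned by the spokes.

```python
# pip install google-cloud-compute
from google.cloud import compute_v1

# Hypothetical names -- replace with your own resources.
HUB_PROJECT = "hub-project"
APPLIANCE_ILB = (
    f"projects/{HUB_PROJECT}/regions/europe-west1/"
    "forwardingRules/appliance-ilb"
)

# Route traffic destined for a spoke's CIDR range through the internal
# passthrough Network Load Balancer that fronts the appliance VMs.
route = compute_v1.Route(
    name="to-spoke-b-via-appliance",
    network=f"projects/{HUB_PROJECT}/global/networks/hub",
    dest_range="10.20.0.0/16",  # CIDR range of spoke B.
    next_hop_ilb=APPLIANCE_ILB,
    priority=1000,
)

compute_v1.RoutesClient().insert(
    project=HUB_PROJECT, route_resource=route
).result()
```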

By design, traffic from one spoke network cannot reach another spoke network. However, if specific workloads must communicate with each other, you can set up direct peering between the spoke networks using VPC Network Peering, or you can share data between applications with Google Cloud services like Cloud Storage or Pub/Sub.

To maintain low latency when the appliance communicates between workloads, the appliance must be in the same region as the workloads. If you use multiple regions in your cloud deployment, you can have one set of appliances and one hub VPC network for each environment in each region. Alternatively, you can use network tags with routes to have all instances communicate with the closest appliance.

Firewall rules can restrict the connectivity within the spoke VPC networks that contain workloads. Often, the teams who administer the workloads also administer these firewall rules. For central policies, you can use hierarchical firewall policies. If you require a central network team to have full control over firewall rules, consider centrally deploying those rules in all VPC networks by using a GitOps approach. In this case, restrict the IAM permissions to modify firewall rules to only central administrators. Spoke VPC networks can also be Shared VPC networks if multiple teams deploy in the spokes.
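As an illustration of the GitOps approach, a pipeline might apply a declarative list of rules to every VPC network it manages. The following is a minimal sketch with hypothetical project and network names; a real pipeline would also handle updates and deletions rather than only inserts.

```python
# pip install google-cloud-compute
from google.cloud import compute_v1

# Hypothetical inventory -- in a GitOps workflow, this list is kept in
# version control and applied by a CI/CD pipeline, not run by hand.
MANAGED_NETWORKS = [
    ("spoke-a-project", "spoke-a"),
    ("spoke-b-project", "spoke-b"),
]

firewalls = compute_v1.FirewallsClient()

for project, network in MANAGED_NETWORKS:
    # One centrally defined rule, stamped into every managed network.
    rule = compute_v1.Firewall(
        name="allow-internal-https",
        network=f"projects/{project}/global/networks/{network}",
        direction="INGRESS",
        allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["443"])],
        source_ranges=["10.0.0.0/8"],
        target_tags=["https-server"],
    )
    firewalls.insert(project=project, firewall_resource=rule).result()
```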

In this design, we recommend that you use VPC Network Peering to connect the hub VPC network and spoke VPC networks because it adds minimal complexity. However, the maximum number of spokes is limited by the following:

  • The limit on VPC Network Peering connections from a single VPC network.
  • Peering group limits, such as the maximum number of forwarding rules for internal passthrough Network Load Balancers for each peering group.

If you expect to reach these limits, you can connect the spoke networks through Cloud VPN. Using Cloud VPN adds extra cost and complexity, and each Cloud VPN tunnel has a bandwidth limit.


To implement this design option, see Create option 2: Hub-and-spoke topology with centralized appliances.

Option 3: Hub-and-spoke topology without appliances

This network design also uses a hub-and-spoke topology, with a hub VPC network that connects to on-premises networks and spoke VPC networks that contain the workloads. Because VPC Network Peering is non-transitive, spoke networks cannot communicate with each other directly.

Use this design when the following is true:

  • You want workloads or environments in Google Cloud to not communicate with each other at all using internal IP addresses, but you do want them to share on-premises connectivity.
  • You want to give teams autonomy in managing their own firewall androuting rules.

Avoid this design when the following is true:

  • You require Layer 7 inspection between workloads.
  • You want to centrally manage routing and firewall rules.
  • You require communication from on-premises services to managed services that are connected to the spokes through additional VPC Network Peering connections, because VPC Network Peering is non-transitive.

The following diagram shows an example implementation of this design.

Option 3 diagram.

The preceding diagram shows the following:

  • A production environment that includes a hub VPC network and multiple spoke VPC networks that contain the workloads.
  • The spoke VPC networks are connected to the hub VPC network by using VPC Network Peering, as shown in the sketch after this list.
  • Connectivity to on-premises locations passes through Cloud Interconnect connections in the hub VPC network.
  • On-premises networks are connected through the Cloud Interconnect connections by using separate VLAN attachments.
  • The development environment has the same VPC structure as the production environment.
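The hub-and-spoke peerings can be created with the google-cloud-compute Python client, as in the following minimal sketch with hypothetical project and network names. A peering only becomes active after it has been created from both sides.

```python
# pip install google-cloud-compute
from google.cloud import compute_v1

networks = compute_v1.NetworksClient()

def add_peering(project: str, network: str, peer_url: str, name: str) -> None:
    """Create one side of a VPC Network Peering."""
    request = compute_v1.NetworksAddPeeringRequest(
        network_peering=compute_v1.NetworkPeering(
            name=name,
            network=peer_url,
            exchange_subnet_routes=True,
        )
    )
    networks.add_peering(
        project=project,
        network=network,
        networks_add_peering_request_resource=request,
    ).result()

# Hypothetical names -- run once per direction; the peering becomes
# ACTIVE only after both sides exist.
HUB = "projects/hub-project/global/networks/hub"
SPOKE = "projects/spoke-a-project/global/networks/spoke-a"
add_peering("hub-project", "hub", SPOKE, "hub-to-spoke-a")
add_peering("spoke-a-project", "spoke-a", HUB, "spoke-a-to-hub")
```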

By design, traffic from one spoke network cannot reach another spoke network. However, if specific workloads must communicate with each other, you can set up direct peering between the spoke networks using VPC Network Peering, or you can share data between applications with Google Cloud services like Cloud Storage or Pub/Sub.

This network design is often used in environments where teams act autonomously and there is no centralized control over firewall and routing rules. However, the scale of this design is limited by the same VPC Network Peering connection and peering group limits that are described in option 2.

Therefore, this design is not typically used in large organizations that have many separate workloads on Google Cloud.

As a variation to the design, you can use Cloud VPN instead of VPC Network Peering. Cloud VPN lets you increase the number of spokes, but adds a bandwidth limit for each tunnel and increases complexity and costs. When you use custom advertised routes, Cloud VPN also allows for transitivity between the spokes without requiring you to directly connect all the spoke networks.


To implement this design option, see Create option 3: Hub-and-spoke topology without appliances.

Option 4: Expose services in a consumer-producer model with Private Service Connect

In this network design, each team or workload gets its own VPC network, which can also be a Shared VPC network. Each VPC network is independently managed and uses Private Service Connect to expose all the services that need to be accessed from outside the VPC network.
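On the producer side, a team publishes its service by creating a service attachment that points at the internal load balancer fronting the workload. The following is a minimal sketch using the google-cloud-compute Python client; all resource names are hypothetical, and the load balancer and the Private Service Connect NAT subnet are assumed to already exist.

```python
# pip install google-cloud-compute
from google.cloud import compute_v1

# Hypothetical producer-side names -- replace with your own.
PROJECT = "workload-a-project"
REGION = "europe-west1"

attachment = compute_v1.ServiceAttachment(
    name="workload-a-attachment",
    # Forwarding rule of the internal load balancer that fronts the service.
    target_service=(
        f"projects/{PROJECT}/regions/{REGION}/forwardingRules/workload-a-ilb"
    ),
    # Accept connections from any consumer; use ACCEPT_MANUAL to allowlist
    # specific consumer projects instead.
    connection_preference="ACCEPT_AUTOMATIC",
    # Subnet reserved for Private Service Connect NAT in the producer VPC.
    nat_subnets=[f"projects/{PROJECT}/regions/{REGION}/subnetworks/psc-nat"],
    enable_proxy_protocol=False,
)

compute_v1.ServiceAttachmentsClient().insert(
    project=PROJECT, region=REGION, service_attachment_resource=attachment
).result()
```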

Use this design when the following is true:

  • Workloads only communicate with each other and the on-premises environment through defined endpoints.
  • You want teams to be independent of each other, and manage their own IPaddress space, firewalls, and routing rules.

Avoid this design when the following is true:

  • Communication between services and applications uses many differentports or channels, or ports and channels change frequently.
  • Communication between workloads uses protocols other than TCP or UDP.
  • You require Layer 7 inspection between workloads.

The following diagram shows an example implementation of this pattern.

Option 4 diagram.

The preceding diagram shows the following:

  • Separate workloads are located in separate projects and VPC networks.
  • A client VM in one VPC network can connect to a workload in another VPC network through a Private Service Connect endpoint, as shown in the sketch after this list.
  • The endpoint is attached to a service attachment in the VPC network where the service is located. The service attachment can be in a different region from the endpoint if the endpoint is configured for global access.
  • The service attachment connects to the workload through Cloud Load Balancing.
  • Clients in the workload VPC network can reach workloads that are located on-premises as follows:
    • The endpoint is connected to a service attachment in a transit VPC network.
    • The service attachment is connected to the on-premises network by using Cloud Interconnect.
  • An internal Application Load Balancer is attached to the service attachment and uses a hybrid network endpoint group to load balance traffic across the endpoints that are located on-premises.
  • On-premises clients can also reach endpoints in the transit VPC network that connect to service attachments in the workload VPC networks.
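On the consumer side, the endpoint itself is a forwarding rule whose target is the producer's service attachment. The following minimal sketch uses hypothetical names and assumes that a static internal IP address for the endpoint has already been reserved in the consumer VPC network.

```python
# pip install google-cloud-compute
from google.cloud import compute_v1

# Hypothetical consumer-side names -- replace with your own.
PROJECT = "client-project"
REGION = "europe-west1"

endpoint = compute_v1.ForwardingRule(
    name="psc-endpoint-workload-a",
    network=f"projects/{PROJECT}/global/networks/client-vpc",
    # A reserved static internal IP address for the endpoint.
    I_p_address=f"projects/{PROJECT}/regions/{REGION}/addresses/workload-a-ip",
    # The producer's service attachment that this endpoint connects to.
    target=(
        "projects/workload-a-project/regions/europe-west1/"
        "serviceAttachments/workload-a-attachment"
    ),
    load_balancing_scheme="",  # Must be empty for a PSC endpoint.
    allow_psc_global_access=True,
)

compute_v1.ForwardingRulesClient().insert(
    project=PROJECT, region=REGION, forwarding_rule_resource=endpoint
).result()
```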


To implement this design option, see Create option 4: Expose services in a consumer-producer model with Private Service Connect.

Best practices for network deployment

After you choose the best network design for your use case, we recommend that you implement proven best practices for your deployment. For more information, see Best practices and reference architectures for VPC design.
