Networking

Last reviewed 2025-05-15 UTC

Networking is required for resources to communicate within your Google Cloud organization and between your cloud environment and on-premises environment. This section describes the structure in the blueprint for VPC networks, IP address space, DNS, firewall policies, and connectivity to the on-premises environment.

Network topology

The blueprint repository provides the following options for your network topology:

  • Use separate Shared VPC networks for each environment, with no network traffic directly allowed between environments.
  • Use a hub-and-spoke model that adds a hub network to connect each environment in Google Cloud, with the network traffic between environments gated by a network virtual appliance (NVA).

Choose the Shared VPC network for each environment topology when you don't want direct network connectivity between environments. Choose the hub-and-spoke network topology when you want to allow network connectivity between environments that is filtered by an NVA, such as when you rely on existing tools that require a direct network path to every server in your environment.

Both topologies use Shared VPC as a principal networking construct because Shared VPC allows a clear separation of responsibilities. Network administrators manage network resources in a centralized host project, and workload teams deploy their own application resources and consume the network resources in service projects that are attached to the host project.
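For illustration, the host/service relationship can be expressed with the Terraform google provider. This is a minimal sketch, not the blueprint's actual code; the project IDs are hypothetical.

```hcl
# Designate the centrally managed network project as a Shared VPC host.
resource "google_compute_shared_vpc_host_project" "host" {
  project = "prj-p-shared-base" # hypothetical host project ID
}

# Attach a workload team's project as a service project so that it can
# consume subnets from the host project's VPC network.
resource "google_compute_shared_vpc_service_project" "service" {
  host_project    = google_compute_shared_vpc_host_project.host.project
  service_project = "prj-p-example-app" # hypothetical service project ID
}
```

Which subnets a service project can actually use is then controlled by granting roles/compute.networkUser on the host project or on individual subnets.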

Shared VPC network for each environment topology

If you require network isolation between your development, non-production, and production networks on Google Cloud, we recommend the Shared VPC network for each environment topology. This topology allows no network traffic between environments.

The following diagram shows the Shared VPC network for each environment topology.

The blueprint VPC network.

The diagram describes the following key concepts of the Shared VPC network for each environment topology:

  • Each environment (production, non-production, and development) has one Shared VPC network. This diagram shows only the production environment, but the same pattern is repeated for each environment.
  • Each Shared VPC network has two subnets, with each subnet in a different region.
  • Connectivity with on-premises resources is enabled through four VLAN attachments to the Dedicated Interconnect instance for each Shared VPC network, using four Cloud Router services (two in each region for redundancy). For more information, see Hybrid connectivity between an on-premises environment and Google Cloud.

By design, this topology doesn't allow network traffic to flow directly between environments. If you do require network traffic to flow directly between environments, you must take additional steps to allow this network path. For example, you might configure Private Service Connect endpoints to expose a service from one VPC network to another VPC network. Alternatively, you might configure your on-premises network to let traffic flow from one Google Cloud environment to the on-premises environment and then to another Google Cloud environment.

Hub-and-spoke network topology

If you deploy resources in Google Cloud that require a direct network path to resources in multiple environments, we recommend the hub-and-spoke network topology.

The hub-and-spoke topology uses several of the concepts that are part of the Shared VPC network for each environment topology, but modifies the topology to add a hub network. The following diagram shows the hub-and-spoke topology.

The example.com VPC network structure when using hub-and-spoke connectivity based on VPC peering.

The diagram describes the following key concepts of the hub-and-spoke network topology:

  • This model adds a hub network, and each of the development, non-production, and production networks (spokes) is connected to the hub network through VPC Network Peering. Alternatively, if you anticipate exceeding the quota limit, you can use an HA VPN gateway instead.
  • Connectivity to on-premises networks is allowed only through the hub network. All spoke networks can communicate with shared resources in the hub network and use this path to connect to on-premises networks.
  • The hub network includes an NVA for each region, deployed redundantly behind internal Network Load Balancer instances. These NVAs serve as the gateways that allow or deny traffic between spoke networks.
  • The hub network also hosts tooling that requires connectivity to all other networks. For example, you might deploy tools for configuration management on VM instances in the common environment.

To enable spoke-to-spoke traffic, the blueprint deploys NVAs on the hub Shared VPC network that act as gateways between networks. Routes are exchanged between hub and spoke VPC networks through custom route exchange. In this scenario, connectivity between spokes must be routed through the NVA because VPC Network Peering is non-transitive, and therefore, spoke VPC networks can't exchange data with each other directly. You must configure the virtual appliances to selectively allow traffic between spokes.
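As a hedged sketch of the peering relationship (the network resources referenced here are hypothetical), each spoke is peered with the hub in both directions, and custom routes are exported from the hub so that spokes learn the routes that point at the NVAs:

```hcl
# Hub side: export custom routes (for example, the route toward the NVAs)
# so that the spoke imports them.
resource "google_compute_network_peering" "hub_to_dev" {
  name                 = "peer-hub-to-dev" # hypothetical
  network              = google_compute_network.hub.self_link # hypothetical network
  peer_network         = google_compute_network.dev.self_link # hypothetical network
  export_custom_routes = true
}

# Spoke side: import the custom routes that the hub exports.
resource "google_compute_network_peering" "dev_to_hub" {
  name                 = "peer-dev-to-hub" # hypothetical
  network              = google_compute_network.dev.self_link
  peer_network         = google_compute_network.hub.self_link
  import_custom_routes = true
}
```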

Project deployment patterns

When creating new projects for workloads, you must decide how resources in this project connect to your existing network. The following patterns for deploying projects are used in the blueprint.

Pattern: Shared VPC service projects

These projects are configured as service projects to a Shared VPC host project.

Use this pattern when resources in your project have the following criteria:

  • Require network connectivity to the on-premises environment or resources in the same Shared VPC topology.

Example usage: example_shared_vpc_project.tf

Pattern: Floating projects

Floating projects are not connected to other VPC networks in your topology.

Use this pattern when resources in your project have the following criteria:

  • Don't require full mesh connectivity to an on-premises environment or resources in the Shared VPC topology.
  • Don't require a VPC network, or you want to manage the VPC network for this project independently of your main VPC network topology (such as when you want to use an IP address range that clashes with the ranges already in use).

You might have a scenario where you want to keep the VPC network of a floating project separate from the main VPC network topology but also want to expose a limited number of endpoints between networks. In this case, publish services by using Private Service Connect to share network access to an individual endpoint across VPC networks without exposing the entire network (see the sketch after these patterns).

Example usage: example_floating_project.tf

Pattern: Peering projects

Peering projects create their own VPC networks and peer to other VPC networks in your topology.

Use this pattern when resources in your project have the following criteria:

  • Require network connectivity in the directly peered VPC network, but don't require transitive connectivity to an on-premises environment or other VPC networks.
  • Must manage the VPC network for this project independently of your main network topology.

If you create peering projects, it's your responsibility to allocate non-conflicting IP address ranges and plan for peering group quota.

Example usage: example_peering_project.tf
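The following sketch shows one way the Private Service Connect publishing mentioned for floating projects might look in Terraform. It assumes a hypothetical internal passthrough load balancer forwarding rule (internal_lb), producer project, and network; none of these names come from the blueprint.

```hcl
# Subnet reserved for Private Service Connect NAT in the producer
# (floating) project's VPC network.
resource "google_compute_subnetwork" "psc_nat" {
  name          = "sb-psc-nat"           # hypothetical
  project       = "prj-floating-example" # hypothetical producer project
  region        = "us-central1"          # hypothetical region
  network       = google_compute_network.floating.id # hypothetical network
  ip_cidr_range = "192.168.10.0/24"      # hypothetical, local to this VPC
  purpose       = "PRIVATE_SERVICE_CONNECT"
}

# Publish the internal load balancer behind a service attachment so that
# consumer VPC networks can reach this single endpoint without peering.
resource "google_compute_service_attachment" "publish" {
  name                  = "sa-example-service" # hypothetical
  project               = "prj-floating-example"
  region                = "us-central1"
  connection_preference = "ACCEPT_AUTOMATIC"
  enable_proxy_protocol = false
  nat_subnets           = [google_compute_subnetwork.psc_nat.id]
  target_service        = google_compute_forwarding_rule.internal_lb.id # hypothetical ILB rule
}
```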

IP address allocation

This section introduces how the blueprint architecture allocates IP address ranges. You might need to change the specific IP address ranges used based on the IP address availability in your existing hybrid environment.

The following table provides a breakdown of the IP address space that's allocated for the blueprint. The hub environment only applies in the hub-and-spoke topology.

| Purpose | Region | Hub environment | Development environment | Non-production environment | Production environment |
|---|---|---|---|---|---|
| Primary subnet ranges | Region 1 | 10.8.0.0/18 | 10.8.64.0/18 | 10.8.128.0/18 | 10.8.192.0/18 |
| | Region 2 | 10.9.0.0/18 | 10.9.64.0/18 | 10.9.128.0/18 | 10.9.192.0/18 |
| | Unallocated | 10.{10-15}.0.0/18 | 10.{10-15}.64.0/18 | 10.{10-15}.128.0/18 | 10.{10-15}.192.0/18 |
| Private services access | Global | 10.16.32.0/21 | 10.16.40.0/21 | 10.16.48.0/21 | 10.16.56.0/21 |
| Private Service Connect endpoints | Global | 10.17.0.5/32 | 10.17.0.6/32 | 10.17.0.7/32 | 10.17.0.8/32 |
| Proxy-only subnets | Region 1 | 10.26.0.0/23 | 10.26.2.0/23 | 10.26.4.0/23 | 10.26.6.0/23 |
| | Region 2 | 10.27.0.0/23 | 10.27.2.0/23 | 10.27.4.0/23 | 10.27.6.0/23 |
| | Unallocated | 10.{28-33}.0.0/23 | 10.{28-33}.2.0/23 | 10.{28-33}.4.0/23 | 10.{28-33}.6.0/23 |
| Secondary subnet ranges | Region 1 | 100.72.0.0/18 | 100.72.64.0/18 | 100.72.128.0/18 | 100.72.192.0/18 |
| | Region 2 | 100.73.0.0/18 | 100.73.64.0/18 | 100.73.128.0/18 | 100.73.192.0/18 |
| | Unallocated | 100.{74-79}.0.0/18 | 100.{74-79}.64.0/18 | 100.{74-79}.128.0/18 | 100.{74-79}.192.0/18 |

The preceding table demonstrates the following concepts for allocating IP address ranges:

  • IP address allocation is subdivided into ranges for each combination of purpose, region, and environment.
  • Some resources are global and don't require subdivisions for each region.
  • By default, for regional resources, the blueprint deploys in two regions. In addition, there are unused IP address ranges so that you can expand into six additional regions.
  • The hub network is only used in the hub-and-spoke network topology, while the development, non-production, and production environments are used in both network topologies.

The following list describes how each type of IP address range is used.

  • Primary subnet ranges: Resources that you deploy to your VPC network, such as virtual machine instances, use internal IP addresses from these ranges.
  • Private services access: Some Google Cloud services such as Cloud SQL require you to preallocate a subnet range for private services access. The blueprint reserves a /21 range globally for each of the Shared VPC networks to allocate IP addresses for services that require private services access. When you create a service that depends on private services access, you allocate a regional /24 subnet from the reserved /21 range (see the Terraform sketch after this list).
  • Private Service Connect: The blueprint provisions each VPC network with a Private Service Connect endpoint to communicate with Google Cloud APIs. This endpoint lets your resources in the VPC network reach Google Cloud APIs without relying on outbound traffic to the internet or publicly advertised internet ranges.
  • Proxy-only subnets: Some types of Application Load Balancers require you to preallocate proxy-only subnets. Although the blueprint doesn't deploy Application Load Balancers that require this range, allocating ranges in advance helps reduce friction for workloads when they need to request a new subnet range to enable certain load balancer resources.
  • Secondary subnet ranges: Some use cases, such as container-based workloads, require secondary ranges. Although the blueprint doesn't deploy services that require secondary ranges, it allocates ranges from the RFC 6598 IP address space for secondary ranges. Alternatively, if you struggle to allocate sufficiently large CIDR blocks for these services, you might consider deploying those services in the floating project pattern introduced in the project deployment patterns section.
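As a sketch of the private services access reservation (using the production /21 from the allocation table; the variable and network references are hypothetical), the reservation and the peering to Google's service producer network might look like this:

```hcl
# Reserve the /21 documented for the production environment so that managed
# services such as Cloud SQL can allocate addresses from it.
resource "google_compute_global_address" "private_services" {
  name          = "ga-private-services" # hypothetical
  project       = var.host_project_id   # hypothetical variable
  network       = google_compute_network.shared.id # hypothetical network
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  address       = "10.16.56.0"
  prefix_length = 21
}

# Peer the VPC network with the service producer network that hosts the
# managed service instances.
resource "google_service_networking_connection" "private_services" {
  network                 = google_compute_network.shared.id
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_services.name]
}
```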

Centralized DNS setup

For DNS resolution between Google Cloud and on-premises environments, we recommend that you use a hybrid approach with two authoritative DNS systems. In this approach, Cloud DNS handles authoritative DNS resolution for your Google Cloud environment and your existing on-premises DNS servers handle authoritative DNS resolution for on-premises resources. Your on-premises environment and Google Cloud environment perform DNS lookups between environments through forwarding requests.

The following diagram demonstrates the DNS topology across the multiple VPC networks that are used in the blueprint.

Cloud DNS setup for the blueprint.

The diagram describes the following components of the DNS design that is deployed by the blueprint:

  • The DNS hub is the central point of DNS exchange between the on-premises environment and the Google Cloud environment. DNS forwarding uses the same Dedicated Interconnect instances and Cloud Routers that are already configured in your network topology.
    • In the Shared VPC network for each environment topology, the DNS hub project is the same as the production Shared VPC host project.
    • In the hub-and-spoke topology, the DNS hub project is the same as the hub Shared VPC host project.
  • Servers in each Shared VPC network can resolve DNS records from other Shared VPC networks through DNS forwarding, which is configured between Cloud DNS in each Shared VPC host project and the DNS hub.
  • On-premises servers can resolve DNS records in Google Cloud environments using DNS server policies that allow queries from on-premises servers. The blueprint configures an inbound server policy in the DNS hub to allocate IP addresses, and the on-premises DNS servers forward requests to these addresses. All DNS requests to Google Cloud reach the DNS hub first, which then resolves records from DNS peers.
  • Servers in Google Cloud can resolve DNS records in the on-premises environment using forwarding zones that query on-premises servers. All DNS requests to the on-premises environment originate from the DNS hub. The DNS request source is 35.199.192.0/19 (see the sketch after this list).
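A hedged Terraform sketch of two of these pieces, the inbound server policy in the DNS hub and a forwarding zone toward on-premises name servers (the project variable, network reference, domain, and server IP are all hypothetical):

```hcl
# Inbound server policy: allocates IP addresses in the DNS hub network that
# on-premises DNS servers can forward queries to.
resource "google_dns_policy" "inbound" {
  name                      = "dp-dns-hub-inbound"    # hypothetical
  project                   = var.dns_hub_project_id # hypothetical variable
  enable_inbound_forwarding = true

  networks {
    network_url = google_compute_network.dns_hub.id # hypothetical network
  }
}

# Forwarding zone: sends queries for the on-premises domain to the
# on-premises DNS servers. Queries originate from 35.199.192.0/19.
resource "google_dns_managed_zone" "onprem" {
  name       = "fz-onprem"        # hypothetical
  project    = var.dns_hub_project_id
  dns_name   = "corp.example.com." # hypothetical on-premises domain
  visibility = "private"

  private_visibility_config {
    networks {
      network_url = google_compute_network.dns_hub.id
    }
  }

  forwarding_config {
    target_name_servers {
      ipv4_address = "192.168.0.53" # hypothetical on-premises DNS server
    }
  }
}
```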

Firewall policies

Google Cloud has multiple firewall policy types. Hierarchical firewall policies are enforced at the organization or folder level to inherit firewall policy rules consistently across all resources in the hierarchy. In addition, you can configure network firewall policies for each VPC network. The blueprint combines these firewall policies to enforce common configurations across all environments using hierarchical firewall policies and to enforce more specific configurations at each individual VPC network using network firewall policies.

The blueprint doesn't use legacy VPC firewall rules. We recommend that you use only firewall policies and avoid mixing them with legacy VPC firewall rules.

Hierarchical firewall policies

The blueprint defines a single hierarchical firewall policy and attaches the policy to each of the production, non-production, development, bootstrap, and common folders. This hierarchical firewall policy contains the rules that should be enforced broadly across all environments, and delegates the evaluation of more granular rules to the network firewall policy for each individual environment.

The following table describes the hierarchical firewall policy rules deployed by the blueprint. A Terraform sketch of one rule follows the table.

| Rule description | Direction of traffic | Filter (IPv4 range) | Protocols and ports | Action |
|---|---|---|---|---|
| Delegate the evaluation of inbound traffic from RFC 1918 to lower levels in the hierarchy. | Ingress | 192.168.0.0/16, 10.0.0.0/8, 172.16.0.0/12 | all | Go to next |
| Delegate the evaluation of outbound traffic to RFC 1918 to lower levels in the hierarchy. | Egress | 192.168.0.0/16, 10.0.0.0/8, 172.16.0.0/12 | all | Go to next |
| IAP for TCP forwarding | Ingress | 35.235.240.0/20 | tcp:22,3389 | Allow |
| Windows server activation | Egress | 35.190.247.13/32 | tcp:1688 | Allow |
| Health checks for Cloud Load Balancing | Ingress | 130.211.0.0/22, 35.191.0.0/16, 209.85.152.0/22, 209.85.204.0/22 | tcp:80,443 | Allow |
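For illustration, the IAP for TCP forwarding rule from the table might be expressed as follows. This is a sketch, not the blueprint's code; the organization ID, folder ID, priority, and resource names are hypothetical.

```hcl
resource "google_compute_firewall_policy" "base" {
  parent      = "organizations/123456789012" # hypothetical org ID
  short_name  = "fp-org-base"                # hypothetical
  description = "Baseline hierarchical firewall policy"
}

# Allow IAP for TCP forwarding, matching the table above.
resource "google_compute_firewall_policy_rule" "allow_iap" {
  firewall_policy = google_compute_firewall_policy.base.id
  priority        = 1000 # hypothetical priority
  direction       = "INGRESS"
  action          = "allow"

  match {
    src_ip_ranges = ["35.235.240.0/20"]
    layer4_configs {
      ip_protocol = "tcp"
      ports       = ["22", "3389"]
    }
  }
}

# Attach the policy to an environment folder; repeat per folder.
resource "google_compute_firewall_policy_association" "production" {
  name              = "fpa-production" # hypothetical
  firewall_policy   = google_compute_firewall_policy.base.id
  attachment_target = "folders/111111111111" # hypothetical folder ID
}
```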

Network firewall policies

The blueprint configures a network firewall policy for each network. Each network firewall policy starts with a minimum set of rules that allow access to Google Cloud services and deny egress to all other IP addresses.

In the hub-and-spoke model, the network firewall policies contain additional rules to allow communication between spokes. The network firewall policy allows outbound traffic from one spoke to the hub or to another spoke, and allows inbound traffic from the NVA in the hub network.

The following table describes the rules in the global network firewall policy deployed for each VPC network in the blueprint.

| Rule description | Direction of traffic | Filter | Protocols and ports |
|---|---|---|---|
| Allow outbound traffic to Google Cloud APIs. | Egress | The Private Service Connect endpoint that is configured for each individual network. See Private access to Google Cloud APIs. | tcp:443 |
| Deny outbound traffic not matched by other rules. | Egress | all | all |
| Allow outbound traffic from one spoke to another spoke (for hub-and-spoke model only). | Egress | The aggregate of all IP addresses used in the hub-and-spoke topology. Traffic that leaves a spoke VPC is routed to the NVA in the hub network first. | all |
| Allow inbound traffic to a spoke from the NVA in the hub network (for hub-and-spoke model only). | Ingress | Traffic originating from the NVAs in the hub network. | all |

When you first deploy the blueprint, a VM instance in a VPC network can communicate with Google Cloud services, but not with other infrastructure resources in the same VPC network. To allow VM instances to communicate, you must add additional rules to your network firewall policy and tags that explicitly allow the VM instances to communicate. Tags are added to VM instances, and traffic is evaluated against those tags. Tags additionally have IAM controls so that you can define them centrally and delegate their use to other teams.

Note: All references to tags in this document refer to tags with IAM controls. We don't recommend VPC firewall rules with legacy network tags.

The following diagram shows an example of how you can add custom tags and network firewall policy rules to let workloads communicate inside a VPC network. A Terraform sketch of this example follows the diagram notes.

Firewall rules in example.com.

The diagram demonstrates the following concepts of this example:

  • The network firewall policy contains Rule 1 that denies outbound traffic from all sources at priority 65530.
  • The network firewall policy contains Rule 2 that allows inbound traffic from instances with the service=frontend tag to instances with the service=backend tag at priority 999.
  • The instance-2 VM can receive traffic from instance-1 because the traffic matches the tags allowed by Rule 2. Rule 2 is matched before Rule 1 is evaluated, based on the priority value.
  • The instance-3 VM doesn't receive traffic. The only firewall policy rule that matches this traffic is Rule 1, so outbound traffic from instance-1 is denied.
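A sketch of Rule 2 using tags with IAM controls (the organization ID, project/network references, and the network firewall policy resource are hypothetical):

```hcl
# A tag key scoped to firewall use on one VPC network.
resource "google_tags_tag_key" "service" {
  parent     = "organizations/123456789012" # hypothetical org ID
  short_name = "service"
  purpose    = "GCE_FIREWALL"
  purpose_data = {
    network = "prj-p-shared-base/vpc-p-shared-base" # hypothetical project/network
  }
}

resource "google_tags_tag_value" "frontend" {
  parent     = google_tags_tag_key.service.id
  short_name = "frontend"
}

resource "google_tags_tag_value" "backend" {
  parent     = google_tags_tag_key.service.id
  short_name = "backend"
}

# Rule 2 from the diagram: allow frontend-tagged instances to reach
# backend-tagged instances, at a higher priority than the deny rule.
resource "google_compute_network_firewall_policy_rule" "frontend_to_backend" {
  project         = "prj-p-shared-base" # hypothetical
  firewall_policy = google_compute_network_firewall_policy.base.name # hypothetical policy
  priority        = 999
  direction       = "INGRESS"
  action          = "allow"

  match {
    src_secure_tags {
      name = google_tags_tag_value.frontend.id
    }
    layer4_configs {
      ip_protocol = "all"
    }
  }

  target_secure_tags {
    name = google_tags_tag_value.backend.id
  }
}
```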

Private access to Google Cloud APIs

To let resources in your VPC networks or on-premises environment reach Google Cloud services, we recommend private connectivity instead of outbound internet traffic to public API endpoints. The blueprint configures Private Google Access on every subnet, creates internal endpoints with Private Service Connect to communicate with Google Cloud services, and configures firewall policies and DNS records to allow traffic to those endpoints. Used together, these controls allow a private path to Google Cloud services, without relying on internet outbound traffic or publicly advertised internet ranges.

The blueprint configures Private Service Connect endpoints with the API bundle called vpc-sc, which allows access to the Google Cloud services supported by the restricted VIP. This control helps mitigate exfiltration risk by preventing access to other Google APIs not related to Google Cloud. This control is also a prerequisite step for enabling VPC Service Controls. For more information about the optional steps to enable VPC Service Controls, see Protect your resources with VPC Service Controls.

The following table describes the Private Service Connect endpoints created for each network.

| Environment | API bundle | Private Service Connect endpoint IP address |
|---|---|---|
| Common | vpc-sc | 10.17.0.5/32 |
| Development | vpc-sc | 10.17.0.6/32 |
| Non-production | vpc-sc | 10.17.0.7/32 |
| Production | vpc-sc | 10.17.0.8/32 |
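As a sketch for one environment (the production address from the table; the project variable and network reference are hypothetical), an endpoint consists of a reserved internal address plus a global forwarding rule that targets the vpc-sc bundle:

```hcl
# Reserve the internal IP address for the endpoint.
resource "google_compute_global_address" "restricted_apis" {
  name         = "ga-restricted-apis" # hypothetical
  project      = var.host_project_id  # hypothetical variable
  network      = google_compute_network.shared.id # hypothetical network
  address_type = "INTERNAL"
  purpose      = "PRIVATE_SERVICE_CONNECT"
  address      = "10.17.0.8" # production endpoint from the table
}

# Forwarding rule that sends traffic for the vpc-sc API bundle to the
# reserved address. PSC rule names for Google APIs must be short, with
# no hyphens.
resource "google_compute_global_forwarding_rule" "restricted_apis" {
  name                  = "pscvpcsc" # hypothetical
  project               = var.host_project_id
  network               = google_compute_network.shared.id
  ip_address            = google_compute_global_address.restricted_apis.id
  target                = "vpc-sc"
  load_balancing_scheme = "" # must be empty for PSC for Google APIs
}
```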

To ensure that traffic for Google Cloud services has a DNS lookup to the correct endpoint, the blueprint configures private DNS zones for each VPC network. The following table describes these private DNS zones.

| Private zone name | DNS name | Record type | Data |
|---|---|---|---|
| googleapis.com. | *.googleapis.com. | CNAME | restricted.googleapis.com. |
| googleapis.com. | restricted.googleapis.com. | A | The Private Service Connect endpoint IP address for that VPC network. |
| gcr.io. | *.gcr.io. | CNAME | gcr.io. |
| gcr.io. | gcr.io. | A | The Private Service Connect endpoint IP address for that VPC network. |
| pkg.dev. | *.pkg.dev. | CNAME | pkg.dev. |
| pkg.dev. | pkg.dev. | A | The Private Service Connect endpoint IP address for that VPC network. |
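A sketch of the googleapis.com. zone for one VPC network (the project variable and network reference are hypothetical; the A record uses the production endpoint address from the earlier table):

```hcl
resource "google_dns_managed_zone" "googleapis" {
  name       = "dz-googleapis"     # hypothetical
  project    = var.host_project_id # hypothetical variable
  dns_name   = "googleapis.com."
  visibility = "private"

  private_visibility_config {
    networks {
      network_url = google_compute_network.shared.id # hypothetical network
    }
  }
}

# Send all googleapis.com lookups to the restricted VIP name...
resource "google_dns_record_set" "wildcard_cname" {
  project      = var.host_project_id
  managed_zone = google_dns_managed_zone.googleapis.name
  name         = "*.googleapis.com."
  type         = "CNAME"
  ttl          = 300
  rrdatas      = ["restricted.googleapis.com."]
}

# ...and resolve that name to this VPC network's Private Service Connect
# endpoint (production value shown).
resource "google_dns_record_set" "restricted_a" {
  project      = var.host_project_id
  managed_zone = google_dns_managed_zone.googleapis.name
  name         = "restricted.googleapis.com."
  type         = "A"
  ttl          = 300
  rrdatas      = ["10.17.0.8"]
}
```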

The blueprint has additional configurations to enforce that these Private Service Connect endpoints are used consistently. Each Shared VPC network also enforces the following:

  • A network firewall policy rule that allows outbound traffic from all sources to the IP address of the Private Service Connect endpoint on TCP:443.
  • A network firewall policy rule that denies outbound traffic to 0.0.0.0/0, which includes the default domains that are used for access to Google Cloud services.

Internet connectivity

The blueprint doesn't allow inbound or outbound traffic between its VPC networks and the internet. For workloads that require internet connectivity, you must take additional steps to design the access paths required.

For workloads that require outbound traffic to the internet, we recommend that you manage outbound traffic through Cloud NAT to allow outbound traffic without unsolicited inbound connections, or through Secure Web Proxy for more granular control to allow outbound traffic to trusted web services only.
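If you choose Cloud NAT, a minimal sketch for one region might look like the following (the names, region, variable, and subnet reference are hypothetical):

```hcl
resource "google_compute_router" "nat" {
  name    = "cr-nat-r1"          # hypothetical
  project = var.host_project_id  # hypothetical variable
  region  = "us-central1"        # hypothetical region
  network = google_compute_network.shared.id # hypothetical network
}

# Allow outbound connections for one subnet without permitting unsolicited
# inbound connections.
resource "google_compute_router_nat" "egress" {
  name                               = "rn-egress" # hypothetical
  project                            = var.host_project_id
  region                             = google_compute_router.nat.region
  router                             = google_compute_router.nat.name
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"

  subnetwork {
    name                    = google_compute_subnetwork.workload.id # hypothetical subnet
    source_ip_ranges_to_nat = ["ALL_IP_RANGES"]
  }
}
```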

For workloads that require inbound traffic from the internet, we recommend that you design your workload with Cloud Load Balancing and Google Cloud Armor to benefit from DDoS and WAF protections.

We don't recommend that you design workloads that allow direct connectivity between the internet and a VM using an external IP address on the VM.

Hybrid connectivity between an on-premises environment and Google Cloud

To establish connectivity between the on-premises environment and Google Cloud, we recommend that you use Dedicated Interconnect to maximize security and reliability. A Dedicated Interconnect connection is a direct link between your on-premises network and Google Cloud.

The following diagram introduces hybrid connectivity between the on-premises environment and a Google Virtual Private Cloud network.

The hybrid connection structure.

The diagram describes the following components of the pattern for 99.99% availability for Dedicated Interconnect:

  • Four Dedicated Interconnect connections, with two connections in one metropolitan area (metro) and two connections in another metro. Within each metro, there are two distinct zones within the colocation facility.
  • The connections are divided into two pairs, with each pair connected to a separate on-premises data center.
  • VLAN attachments are used to connect each Dedicated Interconnect instance to Cloud Routers that are attached to the Shared VPC topology.
  • Each Shared VPC network has four Cloud Routers, two in each region, with the dynamic routing mode set to global so that every Cloud Router can announce all subnets, independent of region.

With global dynamic routing, Cloud Router advertises routes to all subnets in the VPC network. Cloud Router advertises routes to remote subnets (subnets outside of the Cloud Router's region) with a lower priority compared to local subnets (subnets that are in the Cloud Router's region). Optionally, you can change advertised prefixes and priorities when you configure the BGP session for a Cloud Router.
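A sketch of one Cloud Router with a custom advertisement (the name, region, ASN, variable, and network reference are hypothetical); ALL_SUBNETS preserves the default subnet advertisement, and the extra range covers the Private Service Connect endpoint described earlier:

```hcl
resource "google_compute_router" "interconnect_r1a" {
  name    = "cr-shared-base-r1-a" # hypothetical
  project = var.host_project_id   # hypothetical variable
  region  = "us-central1"         # hypothetical region
  network = google_compute_network.shared.id # hypothetical network

  bgp {
    asn            = 64514 # hypothetical private ASN
    advertise_mode = "CUSTOM"

    # Keep advertising every subnet in the VPC network...
    advertised_groups = ["ALL_SUBNETS"]

    # ...and additionally advertise the Private Service Connect endpoint
    # so that on-premises hosts can reach Google APIs privately.
    advertised_ip_ranges {
      range = "10.17.0.8/32" # hypothetical; production endpoint
    }
  }
}
```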

Traffic from Google Cloud to an on-premises environment uses the Cloud Router closest to the cloud resources. Within a single region, multiple routes to on-premises networks have the same multi-exit discriminator (MED) value, and Google Cloud uses equal-cost multi-path (ECMP) routing to distribute outbound traffic between all possible routes.

On-premises configuration changes

To configure connectivity between the on-premises environment and Google Cloud, you must make additional changes in your on-premises environment. The Terraform code in the blueprint automatically configures Google Cloud resources but doesn't modify any of your on-premises network resources.

Some of the components for hybrid connectivity from your on-premises environment to Google Cloud are automatically enabled by the blueprint, including the following:

  • Cloud DNS is configured with DNS forwarding between all Shared VPC networks to a single hub, as described in Centralized DNS setup. A Cloud DNS server policy is configured with inbound forwarder IP addresses.
  • Cloud Router is configured to export routes for all subnets and custom routes for the IP addresses used by the Private Service Connect endpoints.

To enable hybrid connectivity, you must take the following additional steps:

  1. Order a Dedicated Interconnect connection.
  2. Configure on-premises routers and firewalls to allow outbound traffic to the internal IP address space defined in IP address allocation.
  3. Configure your on-premises DNS servers to forward DNS lookups bound for Google Cloud to the inbound forwarder IP addresses that are already configured by the blueprint.
  4. Configure your on-premises DNS servers, firewalls, and routers to accept DNS queries from the Cloud DNS forwarding zone (35.199.192.0/19).
  5. Configure on-premises DNS servers to respond to queries from on-premises hosts to Google Cloud services with the IP addresses defined in Private access to Google Cloud APIs.
  6. For encryption in transit over the Dedicated Interconnect connection, configure MACsec for Cloud Interconnect or configure HA VPN over Cloud Interconnect for IPsec encryption.

For more information, see Private Google Access for on-premises hosts.
