Configure clusters with Shared VPC

This guide shows how to create two Google Kubernetes Engine (GKE) clusters, in separate projects, that use a Shared VPC. For general information about GKE networking, see the Network overview.

The examples in this guide configure the infrastructure for a two-tier web application, as described in the Shared VPC overview.

Why use Shared VPC with GKE

With Shared VPC, you designate one project as the host project, and you can attach other projects, called service projects, to the host project. You create networks, subnets, secondary address ranges, firewall rules, and other network resources in the host project. Then you share selected subnets, including secondary ranges, with the service projects. Components running in a service project can use the Shared VPC to communicate with components running in the other service projects.

You can use Shared VPC with Autopilot clusters and with zonal and regional Standard clusters.

Standard clusters that use Shared VPC cannot use legacy networks and must have VPC-native traffic routing enabled. Autopilot clusters always enable VPC-native traffic routing.

You can configure Shared VPC when you create a new cluster. GKE does not support converting existing clusters to the Shared VPC model.

With Shared VPC, certain quotas and limits apply. For example, there is a quota for the number of networks in a project, and there is a limit on the number of service projects that can be attached to a host project. For details, see Quotas and limits.

Before you begin

Before you perform the exercises in this guide:

  • Choose one of your projects to be the host project.
  • Choose two of your projects to be service projects.

Each project has a name, an ID, and a number. In some cases, the name and the ID are the same. This guide uses the following friendly names and placeholders to refer to your projects:

Friendly name                Project ID placeholder   Project number placeholder
Your host project            HOST_PROJECT_ID          HOST_PROJECT_NUM
Your first service project   SERVICE_PROJECT_1_ID     SERVICE_PROJECT_1_NUM
Your second service project  SERVICE_PROJECT_2_ID     SERVICE_PROJECT_2_NUM
Note: The order of your first and second service projects is not significant. The terms "first" and "second" are used only to distinguish one project from the other.

Find your project IDs and numbers

You can find your project IDs and numbers by using the gcloud CLI or the Google Cloud console.

Console

  1. Go to the Home page of the Google Cloud console.

    Go to the Home page

  2. In the project picker, select the project that you have chosen to be the host project.

  3. Under Project info, you can see the project name, project ID, and project number. Make a note of the ID and number for later.

  4. Do the same for each of the projects that you have chosen to be service projects.

gcloud

List your projects with the following command:

gcloud projects list

The output shows your project names, IDs, and numbers. Make a note of each ID and number for later:

PROJECT_ID        NAME        PROJECT_NUMBER
host-123          host        1027xxxxxxxx
srv-1-456         srv-1       4964xxxxxxxx
srv-2-789         srv-2       4559xxxxxxxx

Enable the GKE API in your projects

Before you continue with the exercises in this guide, make sure that the GKE API is enabled in all three of your projects. Enabling the API in a project creates a GKE service account for the project. To perform the remaining tasks in this guide, each of your projects must have a GKE service account.

You can enable the GKE API using the Google Cloud console or the Google Cloud CLI.

Console

  1. Go to the APIs & Services page in the Google Cloud console.

    Go to APIs & Services

  2. In the project picker, select the project that you have chosen to be the host project.

  3. If Kubernetes Engine API is in the list of APIs, it is already enabled, and you don't need to do anything. If it is not in the list, click Enable APIs and Services. Search for Kubernetes Engine API. Click the Kubernetes Engine API card, and click Enable.

  4. Repeat these steps for each project that you have chosen to be a service project. Each operation may take some time to complete.

gcloud

Enable the GKE API for your three projects. Each operation may take some time to complete:

gcloud services enable container.googleapis.com --project HOST_PROJECT_ID
gcloud services enable container.googleapis.com --project SERVICE_PROJECT_1_ID
gcloud services enable container.googleapis.com --project SERVICE_PROJECT_2_ID

Create a network and two subnets

In this section, you will perform the following tasks:

  1. In your host project, create a network named shared-net.
  2. Create two subnets named tier-1 and tier-2.
  3. For each subnet, create two secondary address ranges: one for Services, and one for Pods.
Note: The secondary address ranges for Pods, Services, and nodes must not overlap 172.17.0.0/16 and must follow the defaults and limits for range sizes.
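You can check the overlap requirement from the note mechanically before creating the subnets. The following shell sketch is illustrative only (the helper functions are not part of gcloud); it converts each planned range to integer bounds and confirms that none of them overlaps the reserved 172.17.0.0/16 block:

```shell
#!/bin/bash
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Succeed (exit 0) if the two CIDR ranges overlap.
cidr_overlap() {
  local start1 size1 start2 size2
  start1=$(ip_to_int "${1%/*}"); size1=$(( 1 << (32 - ${1#*/}) ))
  start2=$(ip_to_int "${2%/*}"); size2=$(( 1 << (32 - ${2#*/}) ))
  [ "$start1" -lt $(( start2 + size2 )) ] && [ "$start2" -lt $(( start1 + size1 )) ]
}

RESERVED=172.17.0.0/16
# The secondary ranges planned for Pods and Services in both subnets.
for range in 10.0.32.0/20 10.4.0.0/14 172.16.16.0/20 172.20.0.0/14; do
  if cidr_overlap "$range" "$RESERVED"; then
    echo "$range overlaps $RESERVED"
  else
    echo "$range ok"
  fi
done
```

All four ranges used in this guide print "ok"; a range such as 172.17.1.0/24 would be reported as overlapping.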

Console

  1. Go to the VPC networks page in the Google Cloud console.

    Go to VPC networks

  2. In the project picker, select your host project.

  3. Click Create VPC Network.

  4. For Name, enter shared-net.

  5. Under Subnet creation mode, select Custom.

Add tier-1

  1. Under Subnets in the New subnet box, for Name, enter tier-1.
  2. For Region, select a region.
  3. Under IP stack type, select IPv4 (single-stack).
  4. For IPv4 range, enter 10.0.4.0/22 as the primary address range for the subnet.
  5. Click Add a secondary IPv4 range.

    • For Subnet range name, enter tier-1-services.
    • For Secondary IPv4 range, enter 10.0.32.0/20.
  6. Click Done.

  7. Click Add a secondary IPv4 range.

    • For Subnet range name, enter tier-1-pods.
    • For Secondary IPv4 range, enter 10.4.0.0/14.
  8. Click Done.

Add tier-2

  1. Click Add subnet.
  2. For Name, enter tier-2.
  3. For Region, select the same region that you selected for the previous subnet.
  4. For IPv4 range, enter 172.16.4.0/22 as the primary address range for the subnet.
  5. Click Add a secondary IPv4 range.

    • For Subnet range name, enter tier-2-services.
    • For Secondary IPv4 range, enter 172.16.16.0/20.
  6. Click Done.

  7. Click Add a secondary IPv4 range.

    • For Subnet range name, enter tier-2-pods.
    • For Secondary IPv4 range, enter 172.20.0.0/14.
  8. Click Done.

  9. Go to the bottom of the page and click Create.

gcloud

In your host project, create a network named shared-net:

gcloud compute networks create shared-net \
    --subnet-mode custom \
    --project HOST_PROJECT_ID

In your new network, create a subnet named tier-1:

gcloud compute networks subnets create tier-1 \
    --project HOST_PROJECT_ID \
    --network shared-net \
    --range 10.0.4.0/22 \
    --region COMPUTE_REGION \
    --secondary-range tier-1-services=10.0.32.0/20,tier-1-pods=10.4.0.0/14

Create another subnet named tier-2:

gcloud compute networks subnets create tier-2 \
    --project HOST_PROJECT_ID \
    --network shared-net \
    --range 172.16.4.0/22 \
    --region COMPUTE_REGION \
    --secondary-range tier-2-services=172.16.16.0/20,tier-2-pods=172.20.0.0/14

Replace COMPUTE_REGION with a Compute Engine region.

Determine the names of service accounts in your service projects

You have two service projects, each of which has several service accounts. This section is concerned with your GKE service accounts and your Google APIs service accounts. You need the names of these service accounts for the next section.

The following table lists the names of the GKE and Google APIs service accounts in your two service projects:

Service account type   Service account name
GKE                    service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com
                       service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com
Google APIs            SERVICE_PROJECT_1_NUM@cloudservices.gserviceaccount.com
                       SERVICE_PROJECT_2_NUM@cloudservices.gserviceaccount.com
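If you script the remaining steps, you can derive both names from a project number rather than copying them from the console. A minimal sketch; the project number below is a made-up example, and in practice you would obtain it with gcloud projects list or gcloud projects describe, as shown earlier in this guide:

```shell
#!/bin/bash
# Hypothetical project number, for illustration only.
SERVICE_PROJECT_1_NUM=496400000000

# GKE service account for the project.
GKE_SA="service-${SERVICE_PROJECT_1_NUM}@container-engine-robot.iam.gserviceaccount.com"
# Google APIs service account for the project.
API_SA="${SERVICE_PROJECT_1_NUM}@cloudservices.gserviceaccount.com"

echo "$GKE_SA"
echo "$API_SA"
```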

Enable Shared VPC and grant roles

To perform the tasks in this section, you must be a Shared VPC Admin. If you aren't a Shared VPC Admin, ask someone who is an Organization Admin to grant you the Compute Shared VPC Admin (compute.xpnAdmin) and Project IAM Admin (resourcemanager.projectIamAdmin) roles for the organization or one or more folders.

In this section, you will perform the following tasks:

  1. In your host project, enable Shared VPC.
  2. Attach your two service projects to the host project.
  3. Either remove or grant the Compute Network User role to the service accounts that belong to your service projects. If you are using the Google Cloud console, you remove the role; if you are using the gcloud CLI, you grant the role.

Console

  1. Go to the Shared VPC page in the Google Cloud console.

    Go to Shared VPC

  2. In the project picker, select your host project.

  3. Click Set up Shared VPC. The Enable host project screen is displayed.

  4. Click Enable & continue. The Attach service projects and select principals section is displayed.

  5. Under Kubernetes Engine access, select Enabled.

  6. Under Select projects to attach, click Add item.

  7. In the Select project field, click Select and choose your first service project.

  8. Click Add item again, and select your second service project.

  9. Click Continue. The Grant access section is displayed.

  10. Under Access mode, select Individual subnet access.

  11. Under Subnets with individual subnet access, scroll through the list and select tier-1 and tier-2.

  12. Click Save. A new page is displayed.

  13. Select Only show subnets that have individual IAM policies.

Remove service accounts from tier-1

  1. Under Individual subnet access, select tier-1 and click Show Permissions Panel.
  2. In the Permissions Panel under Role/Principal, expand Compute Network User.
  3. Search for SERVICE_PROJECT_2_NUM.
  4. Delete all the service accounts that belong to your second service project. That is, delete the service accounts that contain SERVICE_PROJECT_2_NUM.
  5. Confirm that the following service accounts for your first service project are in the list with the Compute Network User role:

    • service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com
    • SERVICE_PROJECT_1_NUM@cloudservices.gserviceaccount.com
    • SERVICE_PROJECT_1_NUM-compute@developer.gserviceaccount.com

    If a service account isn't in the list, do the following to add it:

    1. Click Add Principal.
    2. In the New principals field, enter the service account name.
    3. Under Assign Roles, select Compute Network User.
    4. Click Save.
  6. Under Individual subnet access, clear the tier-1 checkbox.

Remove service accounts from tier-2

  1. Under Individual subnet access, select tier-2.
  2. In the Permissions Panel under Role/Principal, expand Compute Network User.
  3. Search for SERVICE_PROJECT_1_NUM.
  4. Delete all the service accounts that belong to your first service project. That is, delete any service accounts that contain SERVICE_PROJECT_1_NUM.
  5. Confirm that the following service accounts for your second service project are in the list with the Compute Network User role:

    • service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com
    • SERVICE_PROJECT_2_NUM@cloudservices.gserviceaccount.com
    • SERVICE_PROJECT_2_NUM-compute@developer.gserviceaccount.com

    If a service account isn't in the list, do the following to add it:

    1. Click Add Principal.
    2. Enter the service account name in the New principals field.
    3. Under Assign Roles, select Compute Network User.
    4. Click Save.

gcloud

  1. Enable Shared VPC in your host project. The command that you use depends on the required administrative role that you have.

    If you have the Shared VPC Admin role at the organization level:

    gcloud compute shared-vpc enable HOST_PROJECT_ID

    If you have the Shared VPC Admin role at the folder level:

    gcloud beta compute shared-vpc enable HOST_PROJECT_ID
  2. Attach your first service project to your host project:

    gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_1_ID \
        --host-project HOST_PROJECT_ID
  3. Attach your second service project to your host project:

    gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_2_ID \
        --host-project HOST_PROJECT_ID
  4. Get the IAM policy for the tier-1 subnet:

    gcloud compute networks subnets get-iam-policy tier-1 \
        --project HOST_PROJECT_ID \
        --region COMPUTE_REGION

    The output contains an etag field. Make a note of the etag value.

  5. Create a file named tier-1-policy.yaml that has the following content:

    bindings:
    - members:
      - serviceAccount:SERVICE_PROJECT_1_NUM@cloudservices.gserviceaccount.com
      - serviceAccount:service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com
      role: roles/compute.networkUser
    etag: ETAG_STRING

    Replace ETAG_STRING with the etag value that you noted previously.

  6. Set the IAM policy for the tier-1 subnet:

    gcloud compute networks subnets set-iam-policy tier-1 \
        tier-1-policy.yaml \
        --project HOST_PROJECT_ID \
        --region COMPUTE_REGION
  7. Get the IAM policy for the tier-2 subnet:

    gcloud compute networks subnets get-iam-policy tier-2 \
        --project HOST_PROJECT_ID \
        --region COMPUTE_REGION

    The output contains an etag field. Make a note of the etag value.

  8. Create a file named tier-2-policy.yaml that has the following content:

    bindings:
    - members:
      - serviceAccount:SERVICE_PROJECT_2_NUM@cloudservices.gserviceaccount.com
      - serviceAccount:service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com
      role: roles/compute.networkUser
    etag: ETAG_STRING

    Replace ETAG_STRING with the etag value that you noted previously.

  9. Set the IAM policy for the tier-2 subnet:

    gcloud compute networks subnets set-iam-policy tier-2 \
        tier-2-policy.yaml \
        --project HOST_PROJECT_ID \
        --region COMPUTE_REGION

Manage firewall resources

For a GKE cluster in a service project to create and manage the firewall resources in your host project, the service project's GKE service account must be granted the appropriate IAM permissions. You can grant these permissions by using one of the following strategies:

Note: To follow security best practices, choose the finer-grained approach. Granting the service project's GKE service account the Compute Security Admin role allows it more IAM permissions than are necessary for the purposes of this guide.

Console

  1. In the Google Cloud console, go to the IAM page.

    Go to IAM

  2. Select the host project.

  3. Click Grant access, then enter the service project's GKE service account principal, service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com.

  4. Select the Compute Security Admin role from the drop-down list.

  5. Click Save.

gcloud

Grant the service project's GKE service account the Compute Security Admin role within the host project:

gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member=serviceAccount:service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com \
    --role=roles/compute.securityAdmin

Replace the following:

  • HOST_PROJECT_ID: the Shared VPC host project ID
  • SERVICE_PROJECT_NUM: the project number of the service project that contains the GKE service account

For a finer-grained approach, create a custom IAM role that includes only the following permissions: compute.networks.updatePolicy, compute.firewalls.list, compute.firewalls.get, compute.firewalls.create, compute.firewalls.update, and compute.firewalls.delete. Grant the service project's GKE service account that custom role on the host project.

Console

Create a custom role within the host project containing the IAM permissions mentioned earlier:

  1. In the Google Cloud console, go to the Roles page.

    Go to the Roles page

  2. Using the drop-down list at the top of the page, select the host project.

  3. Click Create Role.

  4. Enter a Title, Description, ID, and Role launch stage for the role. The role name cannot be changed after the role is created.

  5. Click Add Permissions.

  6. Filter for compute.networks and select the IAM permissions mentioned previously.

  7. Once all required permissions are selected, click Add.

  8. Click Create.

Grant the service project's GKE service account the newly created custom role within the host project:

  1. In the Google Cloud console, go to the IAM page.

    Go to IAM

  2. Select the host project.

  3. Click Grant access, then enter the service project's GKE service account principal, service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com.

  4. Filter for the Title of the newly created custom role and select it.

  5. Click Save.

gcloud

  1. Create a custom role within the host project containing the IAM permissions mentioned earlier:

    gcloud iam roles create ROLE_ID \
        --title="ROLE_TITLE" \
        --description="ROLE_DESCRIPTION" \
        --stage=LAUNCH_STAGE \
        --permissions=compute.networks.updatePolicy,compute.firewalls.list,compute.firewalls.get,compute.firewalls.create,compute.firewalls.update,compute.firewalls.delete \
        --project=HOST_PROJECT_ID
  2. Grant the service project's GKE service account the newly created custom role within the host project:

    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
        --member=serviceAccount:service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com \
        --role=projects/HOST_PROJECT_ID/roles/ROLE_ID

    Replace the following:

    • ROLE_ID: the name of the role, such as gkeFirewallAdmin
    • ROLE_TITLE: a friendly title for the role, such as GKE Firewall Admin
    • ROLE_DESCRIPTION: a short description of the role, such as GKE service account FW permissions
    • LAUNCH_STAGE: the launch stage of the role in its lifecycle, such as ALPHA, BETA, or GA
    • HOST_PROJECT_ID: the Shared VPC host project ID
    • SERVICE_PROJECT_NUM: the project number of the service project that contains the GKE service account

If you have clusters in more than one service project, you must choose one of the strategies and repeat it for each service project's GKE service account.

Note: If you are using Ingress for internal Application Load Balancers, the Ingress controller does not create a firewall rule to allow connections from the load balancer proxies in the proxy-only subnet. You must create this firewall rule manually. However, the Ingress controller does create firewall rules to allow ingress for Google Cloud health checks.

Summary of roles granted on subnets

Here's a summary of the roles granted on the subnets:

Service account                                                               Role                  Subnet
service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com  Compute Network User  tier-1
SERVICE_PROJECT_1_NUM@cloudservices.gserviceaccount.com                       Compute Network User  tier-1
service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com  Compute Network User  tier-2
SERVICE_PROJECT_2_NUM@cloudservices.gserviceaccount.com                       Compute Network User  tier-2

Verify GKE access

When attaching a service project, enabling GKE access grants the service project's GKE service account the permissions to perform network management operations in the host project.

GKE assigns the following role automatically in the host project when you enable GKE access:

Member                                                                      Role                     Resource
service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com  Host Service Agent User  GKE service account in the host project

However, you must manually grant the Compute Network User IAM role to the GKE service account of the service project so that it can access the host network.

Member                                                                      Role                  Resource
service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com  Compute Network User  Specific subnet or whole host project

If a service project was attached without enabling GKE access, and assuming the GKE API has already been enabled in both the host and service project, you can manually assign the permissions to the service project's GKE service account by adding the following IAM role bindings in the host project:

Member                                                                      Role                     Resource
service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com  Compute Network User     Specific subnet or whole host project
service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com  Host Service Agent User  GKE service agent in the host project

Grant the Host Service Agent User role

Each service project's GKE service agent must have a binding for the Host Service Agent User (roles/container.hostServiceAgentUser) role on the host project. The GKE service agent takes the following form:

service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com

Where SERVICE_PROJECT_NUM is the project number of your service project.

This binding allows the service project's GKE service agent to perform network management operations in the host project, as if it were the host project's GKE service agent. This role can only be granted to a service project's GKE service agent.

Console

If you have been using the Google Cloud console, you do not have to grant the Host Service Agent User role explicitly. That was done automatically when you used the Google Cloud console to attach service projects to your host project.

gcloud

  1. For your first project, grant the Host Service Agent User role to the project's GKE service agent. This role is granted on your host project:

    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
        --member serviceAccount:service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com \
        --role roles/container.hostServiceAgentUser
  2. For your second project, grant the Host Service Agent User role to the project's GKE service agent. This role is granted on your host project:

    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
        --member serviceAccount:service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com \
        --role roles/container.hostServiceAgentUser

Verify usable subnets and secondary IP address ranges

When creating a cluster, you must specify a subnet and the secondary IP address ranges to be used for the cluster's Pods and Services. There are several reasons that an IP address range might not be available for use. Whether you are creating the cluster with the Google Cloud console or the gcloud CLI, you should specify usable IP address ranges.

An IP address range is usable for the new cluster's Services if the range isn't already in use. The IP address range that you specify for the new cluster's Pods can either be an unused range, or it can be a range that's shared with Pods in your other clusters. IP address ranges that are created and managed by GKE can't be used by your cluster.

You can list a project's usable subnets and secondary IP address ranges by using the gcloud CLI.

gcloud

gcloud container subnets list-usable \
    --project SERVICE_PROJECT_ID \
    --network-project HOST_PROJECT_ID

Replace SERVICE_PROJECT_ID with the project ID of the service project.

If you omit the --project or --network-project option, the gcloud CLI command uses the default project from your active configuration. Because the host project and network project are distinct, you must specify one or both of --project and --network-project.

The output is similar to the following:

PROJECT               REGION    NETWORK     SUBNET  RANGE
example-host-project  us-west1  shared-net  tier-1  10.0.4.0/22
┌──────────────────────┬───────────────┬─────────────────────────────┐
│ SECONDARY_RANGE_NAME │ IP_CIDR_RANGE │            STATUS           │
├──────────────────────┼───────────────┼─────────────────────────────┤
│ tier-1-services      │ 10.0.32.0/20  │ usable for pods or services │
│ tier-1-pods          │ 10.4.0.0/14   │ usable for pods or services │
└──────────────────────┴───────────────┴─────────────────────────────┘
example-host-project  us-west1  shared-net  tier-2  172.16.4.0/22
┌──────────────────────┬────────────────┬─────────────────────────────┐
│ SECONDARY_RANGE_NAME │ IP_CIDR_RANGE  │            STATUS           │
├──────────────────────┼────────────────┼─────────────────────────────┤
│ tier-2-services      │ 172.16.16.0/20 │ usable for pods or services │
│ tier-2-pods          │ 172.20.0.0/14  │ usable for pods or services │
└──────────────────────┴────────────────┴─────────────────────────────┘

The list-usable command returns an empty list in the following situations:

  • When the service project's GKE service account does not have the Host Service Agent User role on the host project.
  • When the GKE service account in the host project does not exist (for example, if you've deleted that account accidentally).
  • When the GKE API is not enabled in the host project, which implies that the GKE service account in the host project is missing.

For more information, see the troubleshooting section.

Secondary IP address range limits

You can create 30 secondary ranges in a given subnet. For each cluster, you need two secondary ranges: one for Pods and one for Services.

Note: The primary range and the Pod secondary range can be shared between clusters, but this is not a recommended configuration.
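These two numbers bound how many clusters one subnet can carry. Assuming each cluster gets its own pair of ranges (no sharing), the arithmetic is:

```shell
#!/bin/bash
MAX_SECONDARY_RANGES=30   # per-subnet limit stated above
RANGES_PER_CLUSTER=2      # one range for Pods, one for Services

MAX_CLUSTERS=$(( MAX_SECONDARY_RANGES / RANGES_PER_CLUSTER ))
echo "$MAX_CLUSTERS"   # prints 15
```

Sharing the Pod secondary range between clusters raises this ceiling, but as the note says, that configuration is not recommended.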

Create a cluster in your first service project

To create a cluster in your first service project, perform the following steps using the gcloud CLI or the Google Cloud console.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the project picker, select your first service project.

  3. Click Create.

  4. In the Autopilot or Standard section, click Configure.

  5. For Name, enter tier-1-cluster.

  6. In the Region drop-down list, select the same region that you used for the subnets.

  7. From the navigation pane, click Networking.

  8. Under Cluster networking, click Networks shared with me.

  9. In the Network field, select shared-net.

  10. For Node subnet, select tier-1.

  11. Under Advanced networking options, for Pod secondary CIDR range, select tier-1-pods.

  12. For Services secondary CIDR range, select tier-1-services.

  13. Click Create.

If you created a Standard cluster, you can see that the nodes of the cluster are in the primary address range of the tier-1 subnet by doing the following:

  1. When the creation is complete, the Cluster details page is displayed.
  2. Click the Nodes tab.
  3. Under Node Pools, click default-pool.
  4. Under Instance groups, click the name of the instance group you want to inspect. For example, gke-tier-1-cluster-default-pool-5c5add1f-grp.
  5. In the list of instances, verify that the internal IP addresses of your nodes are in the primary range of the tier-1 subnet: 10.0.4.0/22.

gcloud

Create a cluster named tier-1-cluster in your first service project:

gcloud container clusters create-auto tier-1-cluster \
    --project=SERVICE_PROJECT_1_ID \
    --location=CONTROL_PLANE_LOCATION \
    --network=projects/HOST_PROJECT_ID/global/networks/shared-net \
    --subnetwork=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/tier-1 \
    --cluster-secondary-range-name=tier-1-pods \
    --services-secondary-range-name=tier-1-services

Replace CONTROL_PLANE_LOCATION with the Compute Engine region of your cluster's control plane.

When the creation is complete, verify that your cluster nodes are in the primary range of the tier-1 subnet: 10.0.4.0/22.

gcloud compute instances list --project SERVICE_PROJECT_1_ID

The output shows the internal IP addresses of the nodes:

NAME                    ZONE       ...  INTERNAL_IP
gke-tier-1-cluster-...  ZONE_NAME  ...  10.0.4.2
gke-tier-1-cluster-...  ZONE_NAME  ...  10.0.4.3
gke-tier-1-cluster-...  ZONE_NAME  ...  10.0.4.4

Create a cluster in your second service project

To create a cluster in your second service project, perform the following steps using the gcloud CLI or the Google Cloud console.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the project picker, select your second service project.

  3. Click Create.

  4. In the Standard or Autopilot section, click Configure.

  5. For Name, enter tier-2-cluster.

  6. In the Region drop-down list, select the same region that you used for the subnets.

  7. From the navigation pane, click Networking.

  8. Under Cluster networking, click Networks shared with me.

  9. In the Network field, select shared-net.

  10. For Node subnet, select tier-2.

  11. Under Advanced networking options, for Pod secondary CIDR range, select tier-2-pods.

  12. For Services secondary CIDR range, select tier-2-services.

  13. Click Create.

If you created a Standard cluster, you can see that the nodes of the cluster are in the primary address range of the tier-2 subnet by doing the following:

  1. When the creation is complete, the Cluster details page is displayed.
  2. Click the Nodes tab.
  3. Under Node Pools, click default-pool.
  4. Under Instance groups, click the name of the instance group you want to inspect. For example, gke-tier-2-cluster-default-pool-5c5add1f-grp.
  5. In the list of instances, verify that the internal IP addresses of your nodes are in the primary range of the tier-2 subnet: 172.16.4.0/22.

gcloud

Create a cluster named tier-2-cluster in your second service project:

gcloud container clusters create-auto tier-2-cluster \
    --project=SERVICE_PROJECT_2_ID \
    --location=CONTROL_PLANE_LOCATION \
    --network=projects/HOST_PROJECT_ID/global/networks/shared-net \
    --subnetwork=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/tier-2 \
    --cluster-secondary-range-name=tier-2-pods \
    --services-secondary-range-name=tier-2-services

When the creation is complete, verify that your cluster nodes are in the primary range of the tier-2 subnet: 172.16.4.0/22.

gcloud compute instances list --project SERVICE_PROJECT_2_ID

The output shows the internal IP addresses of the nodes:

NAME                    ZONE       ...  INTERNAL_IP
gke-tier-2-cluster-...  ZONE_NAME  ...  172.16.4.2
gke-tier-2-cluster-...  ZONE_NAME  ...  172.16.4.3
gke-tier-2-cluster-...  ZONE_NAME  ...  172.16.4.4
Note: During GKE cluster upgrades, GKE automatically adds project-level metadata entries like google_compute_project_metadata to track secondary IP address range usage. For details and guidance on managing this metadata with Infrastructure as Code (IaC) tools, see Project metadata for secondary IP address ranges.

Create firewall rules

To allow traffic into the network and between the clusters within the network, you need to create firewall rules. The following sections demonstrate how to create and update firewall rules:

Note: Shared VPC Admins are responsible for creating firewall rules in the Shared VPC network.

Create a firewall rule to enable SSH connection to a node

In your host project, create a firewall rule for the shared-net network. Allow traffic to enter on TCP port 22, which permits you to connect to your cluster nodes using SSH.

Console

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the project picker, select your host project.

  3. From the VPC Networking menu, click Create Firewall Rule.

  4. For Name, enter my-shared-net-rule.

  5. For Network, select shared-net.

  6. For Direction of traffic, select Ingress.

  7. For Action on match, select Allow.

  8. For Targets, select All instances in the network.

  9. For Source filter, select IP ranges.

  10. For Source IP ranges, enter 0.0.0.0/0.

  11. For Protocols and ports, select Specified protocols and ports. In the box, enter tcp:22.

  12. Click Create.

gcloud

Create a firewall rule for your shared network:

gcloud compute firewall-rules create my-shared-net-rule \
    --project HOST_PROJECT_ID \
    --network shared-net \
    --direction INGRESS \
    --allow tcp:22

Connect to a node by using SSH

After creating the firewall rule that allows ingress traffic on TCP port 22, connect to the node using SSH.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the project picker, select your first service project.

  3. Click tier-1-cluster.

  4. On the Cluster details page, click the Nodes tab.

  5. Under Node Pools, click the name of your node pool.

  6. Under Instance groups, click the name of your instance group. For example, gke-tier-1-cluster-default-pool-faf87d48-grp.

  7. In the list of instances, make a note of the internal IP addresses of the nodes. These addresses are in the 10.0.4.0/22 range.

  8. For one of your nodes, click SSH. This succeeds because SSH uses TCP port 22, which is allowed by your firewall rule.

gcloud

List the nodes in your first service project:

gcloudcomputeinstanceslist--projectSERVICE_PROJECT_1_ID

The output includes the names of the nodes in your cluster:

NAME                                           ...
gke-tier-1-cluster-default-pool-faf87d48-3mf8  ...
gke-tier-1-cluster-default-pool-faf87d48-q17k  ...
gke-tier-1-cluster-default-pool-faf87d48-x9rk  ...

Connect to one of your nodes using SSH:

gcloud compute ssh NODE_NAME \
    --project SERVICE_PROJECT_1_ID \
    --zone COMPUTE_ZONE

Replace the following:

  • NODE_NAME: the name of one of your nodes.
  • COMPUTE_ZONE: the name of a Compute Engine zone within the region.

Update the firewall rule to allow traffic between nodes

  1. In your SSH command-line window, start the CoreOS Toolbox:

    /usr/bin/toolbox
  2. In the toolbox shell, ping one of your other nodes in the same cluster. For example:

    ping 10.0.4.4

    The ping command succeeds, because your node and the other node are both in the 10.0.4.0/22 range.

  3. Now, try to ping one of the nodes in the cluster in your other service project. For example:

    ping 172.16.4.3

    This time the ping command fails, because your firewall rule does not allow Internet Control Message Protocol (ICMP) traffic.

  4. At an ordinary command prompt, not your toolbox shell, update your firewall rule to allow ICMP:

    gcloud compute firewall-rules update my-shared-net-rule \
        --project HOST_PROJECT_ID \
        --allow tcp:22,icmp
  5. In your toolbox shell, ping the node again. For example:

    ping 172.16.4.3

    This time the ping command succeeds.

Create additional firewall rules

You can create additional firewall rules to allow communication between nodes, Pods, and Services in your clusters.

For example, the following rule allows traffic to enter from any node, Pod, or Service in tier-1-cluster on any TCP or UDP port:

gcloud compute firewall-rules create my-shared-net-rule-2 \
    --project HOST_PROJECT_ID \
    --network shared-net \
    --allow tcp,udp \
    --direction INGRESS \
    --source-ranges 10.0.4.0/22,10.4.0.0/14,10.0.32.0/20

The following rule allows traffic to enter from any node, Pod, or Service in tier-2-cluster on any TCP or UDP port:

gcloud compute firewall-rules create my-shared-net-rule-3 \
    --project HOST_PROJECT_ID \
    --network shared-net \
    --allow tcp,udp \
    --direction INGRESS \
    --source-ranges 172.16.4.0/22,172.20.0.0/14,172.16.16.0/20

Kubernetes also tries to create and manage firewall resources when necessary, for example when you create a load balancer Service. If Kubernetes can't change the firewall rules because of a permission issue, a Kubernetes Event is raised to guide you on how to make the changes.
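To look for that guidance, you can list recent events in your cluster. This is a generic kubectl invocation, not a command specific to Shared VPC:

```shell
# List recent events across all namespaces, newest last, and look for
# firewall-related messages raised by the service controller.
kubectl get events --all-namespaces \
    --sort-by=.metadata.creationTimestamp
```

The event message typically describes the firewall change that an administrator needs to make in the host project.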

If you want to grant Kubernetes permission to change the firewall rules, see Manage firewall resources.

For Ingress load balancers, if Kubernetes can't change the firewall rules due to insufficient permissions, a firewallXPNError event is emitted every several minutes. In GLBC 1.4 and later, you can mute the firewallXPNError event by adding the networking.gke.io/suppress-firewall-xpn-error: "true" annotation to the Ingress resource. You can remove this annotation at any time to unmute.
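For example, assuming an Ingress named my-ingress (a placeholder name), you could manage the annotation with kubectl:

```shell
# Mute the firewallXPNError event on the Ingress.
# "my-ingress" is a placeholder; substitute your own Ingress name.
kubectl annotate ingress my-ingress \
    networking.gke.io/suppress-firewall-xpn-error="true"

# Unmute by removing the annotation (the trailing "-" deletes it).
kubectl annotate ingress my-ingress \
    networking.gke.io/suppress-firewall-xpn-error-
```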

Create a cluster based on VPC Network Peering in a Shared VPC

You can use Shared VPC with clusters based on VPC Network Peering.

This requires that you grant the following permissions on the host project, either to the user account or to the service account used to create the cluster:

  • compute.networks.get

  • compute.networks.updatePeering

You must also ensure that the control plane IP address range does not overlap with other reserved ranges in the shared network.
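One way to grant just these two permissions is through a custom role in the host project. This is a sketch; the role ID gkeSharedVpcPeering and the member address are placeholders:

```shell
# Create a custom role carrying only the two required permissions.
gcloud iam roles create gkeSharedVpcPeering \
    --project HOST_PROJECT_ID \
    --permissions compute.networks.get,compute.networks.updatePeering

# Bind the custom role to the account that creates the cluster.
# "cluster-creator@example.com" is a placeholder account.
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member "user:cluster-creator@example.com" \
    --role "projects/HOST_PROJECT_ID/roles/gkeSharedVpcPeering"
```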

In this section, you create a VPC-native cluster named cluster-vpc in a predefined Shared VPC network.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click Create.

  3. In the Autopilot or Standard section, click Configure.

  4. For Name, enter cluster-vpc.

  5. From the navigation pane, click Networking.

  6. In the Cluster networking section, select the Enable Private nodes checkbox.

  7. (Optional for Autopilot): Set Control plane IP range to 172.16.0.16/28.

  8. In the Network drop-down list, select the VPC network you created previously.

  9. In the Node subnet drop-down list, select the shared subnet you created previously.

  10. Configure your cluster as needed.

  11. Click Create.

gcloud

Run the following command to create a cluster named cluster-vpc in a predefined Shared VPC:

gcloud container clusters create-auto private-cluster-vpc \
    --project=PROJECT_ID \
    --location=CONTROL_PLANE_LOCATION \
    --network=projects/HOST_PROJECT/global/networks/shared-net \
    --subnetwork=SHARED_SUBNETWORK \
    --cluster-secondary-range-name=tier-1-pods \
    --services-secondary-range-name=tier-1-services \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.0/28

Reserve IP addresses

You can reserve internal and external IP addresses for your Shared VPC clusters. Ensure that the IP addresses are reserved in the service project.

For internal IP addresses, you must provide the subnetwork where the IP address belongs. To reserve an IP address across projects, use the full resource URL to identify the subnetwork.

You can use the following command in the Google Cloud CLI to reserve an internal IP address:

gcloud compute addresses create RESERVED_IP_NAME \
    --region=COMPUTE_REGION \
    --subnet=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/SUBNETWORK_NAME \
    --addresses=IP_ADDRESS \
    --project=SERVICE_PROJECT_ID

To call this command, you must have the compute.subnetworks.use permission on the subnetwork. You can either grant the caller the compute.networkUser role on the subnetwork, or grant the caller a custom role with the compute.subnetworks.use permission at the project level.
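For example, to grant the compute.networkUser role on the subnetwork itself (the member address here is a placeholder):

```shell
# Grant Compute Network User on a single shared subnet in the host project.
# "address-reserver@example.com" is a placeholder account.
gcloud compute networks subnets add-iam-policy-binding SUBNETWORK_NAME \
    --project HOST_PROJECT_ID \
    --region COMPUTE_REGION \
    --member "user:address-reserver@example.com" \
    --role "roles/compute.networkUser"
```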

Clean up

After completing the exercises in this guide, perform the following tasks to remove the resources and prevent unwanted charges to your account:

Delete the clusters

Delete the two clusters you created.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the project picker, select your first service project.

  3. Select the tier-1-cluster, and click Delete.

  4. In the project picker, select your second service project.

  5. Select the tier-2-cluster, and click Delete.

gcloud

gcloud container clusters delete tier-1-cluster \
    --project SERVICE_PROJECT_1_ID \
    --location CONTROL_PLANE_LOCATION

gcloud container clusters delete tier-2-cluster \
    --project SERVICE_PROJECT_2_ID \
    --location CONTROL_PLANE_LOCATION

Disable Shared VPC

Disable Shared VPC in your host project.

Console

  1. Go to the Shared VPC page in the Google Cloud console.

    Go to Shared VPC

  2. In the project picker, select your host project.

  3. Click Disable Shared VPC.

  4. Enter the HOST_PROJECT_ID in the field, and click Disable.

gcloud

gcloud compute shared-vpc associated-projects remove SERVICE_PROJECT_1_ID \
    --host-project HOST_PROJECT_ID

gcloud compute shared-vpc associated-projects remove SERVICE_PROJECT_2_ID \
    --host-project HOST_PROJECT_ID

gcloud compute shared-vpc disable HOST_PROJECT_ID

Delete your firewall rules

Remove the firewall rules you created.

Console

  1. Go to the Firewall page in the Google Cloud console.

    Go to Firewall

  2. In the project picker, select your host project.

  3. In the list of rules, select my-shared-net-rule, my-shared-net-rule-2, and my-shared-net-rule-3.

  4. Click Delete.

gcloud

Delete your firewall rules:

gcloud compute firewall-rules delete \
    my-shared-net-rule \
    my-shared-net-rule-2 \
    my-shared-net-rule-3 \
    --project HOST_PROJECT_ID

Delete the shared network

Delete the shared network you created.

Console

  1. Go to the VPC networks page in the Google Cloud console.

    Go to VPC networks

  2. In the project picker, select your host project.

  3. In the list of networks, click the shared-net link.

  4. Click Delete VPC Network.

gcloud

gcloud compute networks subnets delete tier-1 \
    --project HOST_PROJECT_ID \
    --region COMPUTE_REGION

gcloud compute networks subnets delete tier-2 \
    --project HOST_PROJECT_ID \
    --region COMPUTE_REGION

gcloud compute networks delete shared-net --project HOST_PROJECT_ID

Remove the Host Service Agent User role

Remove the Host Service Agent User roles from your two service projects.

Console

  1. Go to the IAM page in the Google Cloud console.

    Go to IAM

  2. In the project picker, select your host project.

  3. Select Include Google-provided role grants.

  4. In the list of members, select the row that shows service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com is granted the Kubernetes Engine Host Service Agent User role.

  5. Click Edit principal.

  6. Under Kubernetes Engine Host Service Agent User, click the icon to remove the role.

  7. Click Save.

  8. Select the row that shows service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com is granted the Kubernetes Engine Host Service Agent User role.

  9. Click Edit principal.

  10. Under Kubernetes Engine Host Service Agent User, click the icon to remove the role.

  11. Click Save.

gcloud

  1. Remove the Host Service Agent User role from the GKE service account of your first service project:

    gcloud projects remove-iam-policy-binding HOST_PROJECT_ID \
        --member serviceAccount:service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com \
        --role roles/container.hostServiceAgentUser
  2. Remove the Host Service Agent User role from the GKE service account of your second service project:

    gcloud projects remove-iam-policy-binding HOST_PROJECT_ID \
        --member serviceAccount:service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com \
        --role roles/container.hostServiceAgentUser

Troubleshooting

The following sections help you to resolve common issues with Shared VPC clusters.

Error: Failed to get metadata from network project

The following message is a common error when working with Shared VPC clusters:

Failed to get metadata from network project. GCE_PERMISSION_DENIED: Google Compute Engine: Required 'compute.projects.get' permission for 'projects/HOST_PROJECT_ID'

This error can occur for the following reasons:

  • The GKE API has not been enabled in the host project.

  • The host project's GKE service account does not exist. For example, it might have been deleted.

  • The host project's GKE service account does not have the Kubernetes Engine Service Agent (container.serviceAgent) role in the host project. The binding might have been accidentally removed.

  • The service project's GKE service account does not have the Host Service Agent User role in the host project.

To resolve the issue, determine whether the host project's GKE service account exists.

If the service account doesn't exist, re-provision it, for example by re-enabling the Kubernetes Engine API in the host project, and then verify that it holds the Kubernetes Engine Service Agent role.
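As a sketch, one way to re-provision the service agent is to trigger service identity creation for the Kubernetes Engine API and then restore its role binding; verify both commands against your own project setup before running them:

```shell
# Trigger re-creation of the GKE service agent in the host project.
gcloud beta services identity create \
    --service container.googleapis.com \
    --project HOST_PROJECT_ID

# Restore the Kubernetes Engine Service Agent role binding if it was removed.
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member serviceAccount:service-HOST_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com \
    --role roles/container.serviceAgent
```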

Issue: Connectivity

If you're experiencing connectivity issues between Compute Engine VMs that are in the same Virtual Private Cloud (VPC) network or in two VPC networks connected with VPC Network Peering, refer to Troubleshooting connectivity between virtual machine (VM) instances with internal IP addresses in the Virtual Private Cloud (VPC) documentation.

Issue: Packet loss

If you're experiencing issues with packet loss when sending traffic from a cluster to an external IP address using Cloud NAT, VPC-native clusters, or the IP masquerade agent, see Troubleshoot Cloud NAT packet loss from a cluster.

What's next

Except as otherwise noted, the content of this page is licensed under theCreative Commons Attribution 4.0 License, and code samples are licensed under theApache 2.0 License. For details, see theGoogle Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2026-02-18 UTC.