Configure clusters with Shared VPC
This guide shows how to create two Google Kubernetes Engine (GKE) clusters, in separate projects, that use a Shared VPC. For general information about GKE networking, visit the Network overview.

The examples in this guide configure the infrastructure for a two-tier web application, as described in the Shared VPC overview.
Why use Shared VPC with GKE
With Shared VPC, you designate one project as the host project, and you can attach other projects, called service projects, to the host project. You create networks, subnets, secondary address ranges, firewall rules, and other network resources in the host project. Then you share selected subnets, including secondary ranges, with the service projects. Components running in a service project can use the Shared VPC to communicate with components running in the other service projects.

You can use Shared VPC with Autopilot clusters and with zonal and regional Standard clusters.

Standard clusters that use Shared VPC cannot use legacy networks and must have VPC-native traffic routing enabled. Autopilot clusters always enable VPC-native traffic routing.

You can configure Shared VPC when you create a new cluster. GKE does not support converting existing clusters to the Shared VPC model.

With Shared VPC, certain quotas and limits apply. For example, there is a quota for the number of networks in a project, and there is a limit on the number of service projects that can be attached to a host project. For details, see Quotas and limits.
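For example, you can inspect the relevant quota in the intended host project with the gcloud CLI. A minimal sketch; the NETWORKS metric name is an assumption based on Compute Engine project quota metrics:

```
# List the network quota and current usage for the prospective host project.
gcloud compute project-info describe --project HOST_PROJECT_ID \
    --flatten="quotas[]" \
    --filter="quotas.metric=NETWORKS" \
    --format="table(quotas.metric,quotas.usage,quotas.limit)"
```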
Before you begin
Before you start to set up a cluster with Shared VPC:
- Ensure you have a Google Cloud organization.
- Ensure your organization has three Google Cloud projects.
- Ensure that you're familiar with the Shared VPC concepts, including the various Identity and Access Management (IAM) roles used by Shared VPC. The tasks in this guide need to be performed by a Shared VPC Admin.
- Ensure that you're familiar with any organization policy constraints applicable to your organization, folder, or projects. An Organization Policy Administrator might have defined constraints that limit which projects can be Shared VPC host projects or that limit which subnets can be shared. Refer to organization policy constraints for more information.
Before you perform the exercises in this guide:
- Choose one of your projects to be the host project.
- Choose two of your projects to be service projects.
Each project has a name, an ID, and a number. In some cases, the name and the ID are the same. This guide uses the following friendly names and placeholders to refer to your projects:
| Friendly name | Project ID placeholder | Project number placeholder |
|---|---|---|
| Your host project | HOST_PROJECT_ID | HOST_PROJECT_NUM |
| Your first service project | SERVICE_PROJECT_1_ID | SERVICE_PROJECT_1_NUM |
| Your second service project | SERVICE_PROJECT_2_ID | SERVICE_PROJECT_2_NUM |
Find your project IDs and numbers
You can find your project IDs and numbers by using the gcloud CLI or the Google Cloud console.
Console
Go to the Home page of the Google Cloud console.

In the project picker, select the project that you have chosen to be the host project.

Under Project info, you can see the project name, project ID, and project number. Make a note of the ID and number for later.

Do the same for each of the projects that you have chosen to be service projects.
gcloud
List your projects with the following command:
```
gcloud projects list
```

The output shows your project names, IDs, and numbers. Make a note of the ID and number for later:

```
PROJECT_ID  NAME   PROJECT_NUMBER
host-123    host   1027xxxxxxxx
srv-1-456   srv-1  4964xxxxxxxx
srv-2-789   srv-2  4559xxxxxxxx
```
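If you plan to run the remaining gcloud commands in a single shell session, it can help to capture the IDs and numbers in environment variables. A minimal sketch, assuming the example projects shown above:

```
# Hypothetical project IDs from the example output; substitute your own.
export HOST_PROJECT_ID=host-123
export SERVICE_PROJECT_1_ID=srv-1-456
export SERVICE_PROJECT_2_ID=srv-2-789

# Look up the project numbers instead of copying them by hand.
export SERVICE_PROJECT_1_NUM=$(gcloud projects describe "$SERVICE_PROJECT_1_ID" --format="value(projectNumber)")
export SERVICE_PROJECT_2_NUM=$(gcloud projects describe "$SERVICE_PROJECT_2_ID" --format="value(projectNumber)")
```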
Enable the GKE API in your projects

Before you continue with the exercises in this guide, make sure that the GKE API is enabled in all three of your projects. Enabling the API in a project creates a GKE service account for the project. To perform the remaining tasks in this guide, each of your projects must have a GKE service account.

You can enable the GKE API using the Google Cloud console or the Google Cloud CLI.
Console
Go to the APIs & Services page in the Google Cloud console.

In the project picker, select the project that you have chosen to be the host project.

If Kubernetes Engine API is in the list of APIs, it is already enabled, and you don't need to do anything. If it is not in the list, click Enable APIs and Services. Search for Kubernetes Engine API. Click the Kubernetes Engine API card, and click Enable.

Repeat these steps for each project that you have chosen to be a service project. Each operation may take some time to complete.
gcloud
Enable the GKE API for your three projects. Each operation may take some time to complete:

```
gcloud services enable container.googleapis.com --project HOST_PROJECT_ID
gcloud services enable container.googleapis.com --project SERVICE_PROJECT_1_ID
gcloud services enable container.googleapis.com --project SERVICE_PROJECT_2_ID
```
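To confirm that the API is enabled, you can list the enabled services in each project; a sketch for the host project:

```
# The GKE API appears as container.googleapis.com when enabled.
gcloud services list --enabled --project HOST_PROJECT_ID | grep container.googleapis.com
```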
Create a network and two subnets

In this section, you will perform the following tasks:
- In your host project, create a network named shared-net.
- Create two subnets named tier-1 and tier-2.
- For each subnet, create two secondary address ranges: one for Services, and one for Pods.

Note: Secondary ranges can't use 172.17.0.0/16 and must follow the defaults and limits for range sizes.

Console
Go to the VPC networks page in the Google Cloud console.

In the project picker, select your host project.

Click Create VPC Network.

For Name, enter shared-net.

Under Subnet creation mode, select Custom.
Add tier-1

- Under Subnets, in the New subnet box, for Name, enter tier-1.
- For Region, select a region.
- Under IP stack type, select IPv4 (single-stack).
- For IPv4 range, enter 10.0.4.0/22 as the primary address range for the subnet.
- Click Add a secondary IPv4 range.
- For Subnet range name, enter tier-1-services.
- For Secondary IPv4 range, enter 10.0.32.0/20.
- Click Done.
- Click Add a secondary IPv4 range.
- For Subnet range name, enter tier-1-pods.
- For Secondary IPv4 range, enter 10.4.0.0/14.
- Click Done.
Add tier-2

- Click Add subnet.
- For Name, enter tier-2.
- For Region, select the same region that you selected for the previous subnet.
- For IPv4 range, enter 172.16.4.0/22 as the primary address range for the subnet.
- Click Add a secondary IPv4 range.
- For Subnet range name, enter tier-2-services.
- For Secondary IPv4 range, enter 172.16.16.0/20.
- Click Done.
- Click Add a secondary IPv4 range.
- For Subnet range name, enter tier-2-pods.
- For Secondary IPv4 range, enter 172.20.0.0/14.
- Click Done.

Go to the bottom of the page and click Create.
gcloud
In your host project, create a network named shared-net:

```
gcloud compute networks create shared-net \
    --subnet-mode custom \
    --project HOST_PROJECT_ID
```

In your new network, create a subnet named tier-1:

```
gcloud compute networks subnets create tier-1 \
    --project HOST_PROJECT_ID \
    --network shared-net \
    --range 10.0.4.0/22 \
    --region COMPUTE_REGION \
    --secondary-range tier-1-services=10.0.32.0/20,tier-1-pods=10.4.0.0/14
```

Create another subnet named tier-2:

```
gcloud compute networks subnets create tier-2 \
    --project HOST_PROJECT_ID \
    --network shared-net \
    --range 172.16.4.0/22 \
    --region COMPUTE_REGION \
    --secondary-range tier-2-services=172.16.16.0/20,tier-2-pods=172.20.0.0/14
```

Replace COMPUTE_REGION with a Compute Engine region.
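To confirm that the subnets and their secondary ranges were created as intended, you can describe them; a sketch for tier-1:

```
# Print only the primary range and the secondary ranges of the subnet.
gcloud compute networks subnets describe tier-1 \
    --project HOST_PROJECT_ID \
    --region COMPUTE_REGION \
    --format="yaml(ipCidrRange,secondaryIpRanges)"
```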
Determine the names of service accounts in your service projects
You have two service projects, each of which has several service accounts. This section is concerned with your GKE service accounts and your Google APIs service accounts. You need the names of these service accounts for the next section.

The following table lists the names of the GKE and Google APIs service accounts in your two service projects:
| Service account type | Service account name |
|---|---|
| GKE | service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com |
| GKE | service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com |
| Google APIs | SERVICE_PROJECT_1_NUM@cloudservices.gserviceaccount.com |
| Google APIs | SERVICE_PROJECT_2_NUM@cloudservices.gserviceaccount.com |
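These Google-managed accounts don't appear under gcloud iam service-accounts list in your projects, but their names follow directly from the project numbers. A minimal sketch that derives them in a shell, assuming SERVICE_PROJECT_1_NUM was captured earlier with gcloud projects describe:

```
# Derive the service account names from the project number.
echo "GKE:         service-${SERVICE_PROJECT_1_NUM}@container-engine-robot.iam.gserviceaccount.com"
echo "Google APIs: ${SERVICE_PROJECT_1_NUM}@cloudservices.gserviceaccount.com"
```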
Enable Shared VPC and grant roles
To perform the tasks in this section, you must be a Shared VPC Admin. If you aren't a Shared VPC Admin, ask someone who is an Organization Admin to grant you the Compute Shared VPC Admin (compute.xpnAdmin) and Project IAM Admin (resourcemanager.projectIamAdmin) roles for the organization or one or more folders.
In this section, you will perform the following tasks:
- In your host project, enable Shared VPC.
- Attach your two service projects to the host project.
- Either remove or grant the Compute Network User role to the service accounts that belong to your service projects. If you are using the Google Cloud console, you remove the role; if you are using the gcloud CLI, you grant the role.
Console
Go to the Shared VPC page in the Google Cloud console.

In the project picker, select your host project.

Click Set up Shared VPC. The Enable host project screen is displayed.

Click Enable & continue. The Attach service projects and select principals section is displayed.

Under Kubernetes Engine access, select Enabled.

Under Select projects to attach, click Add item. In the Select project field, click Select and choose your first service project.

Click Add item again, and select your second service project.

Click Continue. The Grant access section is displayed.

Under Access mode, select Individual subnet access.

Under Subnets with individual subnet access, scroll through the list and select tier-1 and tier-2.

Click Save. A new page is displayed.

Select Only show subnets that have individual IAM policies.
Remove service accounts from tier-1

- Under Individual subnet access, select tier-1 and click Show Permissions Panel.
- In the Permissions Panel, under Role/Principal, expand Compute Network User.
- Search for SERVICE_PROJECT_2_NUM.
- Delete all the service accounts that belong to your second service project. That is, delete the service accounts that contain SERVICE_PROJECT_2_NUM.

Confirm that the following service accounts for your first service project are in the list with the Compute Network User role:

```
service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com
SERVICE_PROJECT_1_NUM@cloudservices.gserviceaccount.com
SERVICE_PROJECT_1_NUM-compute@developer.gserviceaccount.com
```

If a service account isn't in the list, do the following to add it:

- Click Add Principal.
- In the New principals field, enter the service account name.
- Under Assign Roles, select Compute Network User.
- Click Save.

Under Individual subnet access, clear the tier-1 checkbox.
Remove service accounts from tier-2

- Under Individual subnet access, select tier-2.
- In the Permissions Panel, under Role/Principal, expand Compute Network User.
- Search for SERVICE_PROJECT_1_NUM.
- Delete all the service accounts that belong to your first service project. That is, delete any service accounts that contain SERVICE_PROJECT_1_NUM.

Confirm that the following service accounts for your second service project are in the list with the Compute Network User role:

```
service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com
SERVICE_PROJECT_2_NUM@cloudservices.gserviceaccount.com
SERVICE_PROJECT_2_NUM-compute@developer.gserviceaccount.com
```

If a service account isn't in the list, do the following to add it:

- Click Add Principal.
- In the New principals field, enter the service account name.
- Under Assign Roles, select Compute Network User.
- Click Save.
gcloud
Enable Shared VPC in your host project. The command that you use depends on the required administrative role that you have.

If you have the Shared VPC Admin role at the organization level:

```
gcloud compute shared-vpc enable HOST_PROJECT_ID
```

If you have the Shared VPC Admin role at the folder level:

```
gcloud beta compute shared-vpc enable HOST_PROJECT_ID
```

Attach your first service project to your host project:

```
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_1_ID \
    --host-project HOST_PROJECT_ID
```

Attach your second service project to your host project:

```
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_2_ID \
    --host-project HOST_PROJECT_ID
```

Get the IAM policy for the tier-1 subnet:

```
gcloud compute networks subnets get-iam-policy tier-1 \
    --project HOST_PROJECT_ID \
    --region COMPUTE_REGION
```

The output contains an etag field. Make a note of the etag value.

Create a file named tier-1-policy.yaml that has the following content:

```
bindings:
- members:
  - serviceAccount:SERVICE_PROJECT_1_NUM@cloudservices.gserviceaccount.com
  - serviceAccount:service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com
  role: roles/compute.networkUser
etag: ETAG_STRING
```

Replace ETAG_STRING with the etag value that you noted previously.

Set the IAM policy for the tier-1 subnet:

```
gcloud compute networks subnets set-iam-policy tier-1 \
    tier-1-policy.yaml \
    --project HOST_PROJECT_ID \
    --region COMPUTE_REGION
```

Get the IAM policy for the tier-2 subnet:

```
gcloud compute networks subnets get-iam-policy tier-2 \
    --project HOST_PROJECT_ID \
    --region COMPUTE_REGION
```

The output contains an etag field. Make a note of the etag value.

Create a file named tier-2-policy.yaml that has the following content:

```
bindings:
- members:
  - serviceAccount:SERVICE_PROJECT_2_NUM@cloudservices.gserviceaccount.com
  - serviceAccount:service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com
  role: roles/compute.networkUser
etag: ETAG_STRING
```

Replace ETAG_STRING with the etag value that you noted previously.

Set the IAM policy for the tier-2 subnet:

```
gcloud compute networks subnets set-iam-policy tier-2 \
    tier-2-policy.yaml \
    --project HOST_PROJECT_ID \
    --region COMPUTE_REGION
```
Manage firewall resources
For a GKE cluster in a service project to create and manage the firewall resources in your host project, the service project's GKE service account must be granted the appropriate IAM permissions. You can grant these permissions by using one of the following strategies:
Note: To follow security best practices, choose the finer-grained approach. Granting the service project's GKE service account the Compute Security Admin role allows it more IAM permissions than are necessary for the purposes of this guide.

Grant the service project's GKE service account the Compute Security Admin role on the host project.
Console
In the Google Cloud console, go to the IAM page.

Select the host project.

Click Grant access, then enter the service project's GKE service account principal, service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com.

Select the Compute Security Admin role from the drop-down list.

Click Save.
gcloud
Grant the service project's GKE service account the Compute Security Admin role within the host project:
```
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member=serviceAccount:service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com \
    --role=roles/compute.securityAdmin
```

Replace the following:

- HOST_PROJECT_ID: the Shared VPC host project ID
- SERVICE_PROJECT_NUM: the project number of the service project containing the GKE service account
For a finer-grained approach, create a custom IAM role that includes only the following permissions: compute.networks.updatePolicy, compute.firewalls.list, compute.firewalls.get, compute.firewalls.create, compute.firewalls.update, and compute.firewalls.delete. Grant the service project's GKE service account that custom role on the host project.
Console
Create a custom role within the host project containing the IAM permissions mentioned earlier:

In the Google Cloud console, go to the Roles page.

Using the drop-down list at the top of the page, select the host project.

Click Create Role.

Enter a Title, Description, ID, and Role launch stage for the role. The role name cannot be changed after the role is created.

Click Add Permissions.

Filter for compute.networks and select the IAM permissions mentioned previously.

Once all required permissions are selected, click Add.

Click Create.
Grant the service project's GKE service account the newly created custom role within the host project:

In the Google Cloud console, go to the IAM page.

Select the host project.

Click Grant access, then enter the service project's GKE service account principal, service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com.

Filter for the Title of the newly created custom role and select it.

Click Save.
gcloud
Create a custom role within the host project containing the IAM permissions mentioned earlier:

```
gcloud iam roles create ROLE_ID \
    --title="ROLE_TITLE" \
    --description="ROLE_DESCRIPTION" \
    --stage=LAUNCH_STAGE \
    --permissions=compute.networks.updatePolicy,compute.firewalls.list,compute.firewalls.get,compute.firewalls.create,compute.firewalls.update,compute.firewalls.delete \
    --project=HOST_PROJECT_ID
```

Grant the service project's GKE service account the newly created custom role within the host project:

```
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member=serviceAccount:service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com \
    --role=projects/HOST_PROJECT_ID/roles/ROLE_ID
```

Replace the following:

- ROLE_ID: the name of the role, such as gkeFirewallAdmin
- ROLE_TITLE: a friendly title for the role, such as GKE Firewall Admin
- ROLE_DESCRIPTION: a short description of the role, such as GKE service account FW permissions
- LAUNCH_STAGE: the launch stage of the role in its lifecycle, such as ALPHA, BETA, or GA
- HOST_PROJECT_ID: the Shared VPC host project ID
- SERVICE_PROJECT_NUM: the project number of the service project containing the GKE service account
If you have clusters in more than one service project, you must choose one of the strategies and repeat it for each service project's GKE service account.
Note: If you are using Ingress for internal Application Load Balancers, the Ingress controller does not create a firewall rule to allow connections from the load balancer proxies in the proxy-only subnet. You must create this firewall rule manually. However, the Ingress controller does create firewall rules to allow ingress for Google Cloud health checks.
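A minimal sketch of such a manually created rule, assuming a hypothetical rule name allow-proxy-connection and a placeholder PROXY_ONLY_SUBNET_RANGE for your proxy-only subnet's range (adjust the protocols and ports to your application's needs):

```
# Allow the internal Application Load Balancer proxies to reach your backends.
gcloud compute firewall-rules create allow-proxy-connection \
    --project HOST_PROJECT_ID \
    --network shared-net \
    --direction INGRESS \
    --allow tcp \
    --source-ranges PROXY_ONLY_SUBNET_RANGE
```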
Summary of roles granted on subnets

Here's a summary of the roles granted on the subnets:
| Service account | Role | Subnet |
|---|---|---|
| service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com | Compute Network User | tier-1 |
| SERVICE_PROJECT_1_NUM@cloudservices.gserviceaccount.com | Compute Network User | tier-1 |
| service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com | Compute Network User | tier-2 |
| SERVICE_PROJECT_2_NUM@cloudservices.gserviceaccount.com | Compute Network User | tier-2 |
Verify GKE access
When attaching a service project, enabling GKE access grants the service project's GKE service account the permissions to perform network management operations in the host project.

GKE assigns the following role automatically in the host project when enabling GKE access:
| Member | Role | Resource |
|---|---|---|
| service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com | Host Service Agent User | GKE service account in the host project |
However, you must manually grant the Compute Network User role to the GKE service account of the service project so that it can access the host network.
| Member | Role | Resource |
|---|---|---|
| service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com | Compute Network User | Specific subnet or whole host project |
If a service project was attached without enabling GKE access, and assuming the GKE API has already been enabled in both the host and service project, you can manually assign the permissions to the service project's GKE service account by adding the following IAM role bindings in the host project:
| Member | Role | Resource |
|---|---|---|
| service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com | Compute Network User | Specific subnet or whole host project |
| service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com | Host Service Agent User | GKE service agent in the host project |
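A sketch of the first binding with the gcloud CLI, granting Compute Network User on the whole host project (the subnet-level variant appears in the previous section, and the Host Service Agent User grant is shown in the next one):

```
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member "serviceAccount:service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com" \
    --role roles/compute.networkUser
```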
Grant the Host Service Agent User role
Each service project's GKE service agent must have a binding for the Host Service Agent User (roles/container.hostServiceAgentUser) role on the host project. The GKE service agent takes the following form:

```
service-SERVICE_PROJECT_NUM@container-engine-robot.iam.gserviceaccount.com
```

Where SERVICE_PROJECT_NUM is the project number of your service project.
This binding allows the service project's GKE service agent to perform network management operations in the host project, as if it were the host project's GKE service agent. This role can only be granted to a service project's GKE service agent.
Console
If you have been using the Google Cloud console, you do not have to grant the Host Service Agent User role explicitly. That was done automatically when you used the Google Cloud console to attach service projects to your host project.
gcloud
For your first project, grant the Host Service Agent User role to the project's GKE service agent. This role is granted on your host project:

```
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member serviceAccount:service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com \
    --role roles/container.hostServiceAgentUser
```

For your second project, grant the Host Service Agent User role to the project's GKE service agent. This role is granted on your host project:

```
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member serviceAccount:service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com \
    --role roles/container.hostServiceAgentUser
```
Verify usable subnets and secondary IP address ranges
When creating a cluster, you must specify a subnet and the secondary IP address ranges to be used for the cluster's Pods and Services. There are several reasons why an IP address range might not be available for use. Whether you are creating the cluster with the Google Cloud console or the gcloud CLI, you should specify usable IP address ranges.

An IP address range is usable for the new cluster's Services if the range isn't already in use. The IP address range that you specify for the new cluster's Pods can either be an unused range, or a range that's shared with Pods in your other clusters. IP address ranges that are created and managed by GKE can't be used by your cluster.
You can list a project's usable subnets and secondary IP address ranges by using the gcloud CLI.
gcloud
```
gcloud container subnets list-usable \
    --project SERVICE_PROJECT_ID \
    --network-project HOST_PROJECT_ID
```

Replace SERVICE_PROJECT_ID with the project ID of the service project.

If you omit the --project or --network-project option, the gcloud CLI command uses the default project from your active configuration. Because the host project and network project are distinct, you must specify one or both of --project and --network-project.

The output is similar to the following:

```
PROJECT               REGION    NETWORK     SUBNET  RANGE
example-host-project  us-west1  shared-net  tier-1  10.0.4.0/22
┌──────────────────────┬───────────────┬─────────────────────────────┐
│ SECONDARY_RANGE_NAME │ IP_CIDR_RANGE │            STATUS           │
├──────────────────────┼───────────────┼─────────────────────────────┤
│ tier-1-services      │ 10.0.32.0/20  │ usable for pods or services │
│ tier-1-pods          │ 10.4.0.0/14   │ usable for pods or services │
└──────────────────────┴───────────────┴─────────────────────────────┘
example-host-project  us-west1  shared-net  tier-2  172.16.4.0/22
┌──────────────────────┬────────────────┬─────────────────────────────┐
│ SECONDARY_RANGE_NAME │ IP_CIDR_RANGE  │            STATUS           │
├──────────────────────┼────────────────┼─────────────────────────────┤
│ tier-2-services      │ 172.16.16.0/20 │ usable for pods or services │
│ tier-2-pods          │ 172.20.0.0/14  │ usable for pods or services │
└──────────────────────┴────────────────┴─────────────────────────────┘
```

The list-usable command returns an empty list in the following situations:
- When the service project's GKE service account does not have the Host Service Agent User role on the host project.
- When the GKE service account in the host project does not exist (for example, if you've deleted that account accidentally).
- When the GKE API is not enabled in the host project, which implies the GKE service account in the host project is missing.
For more information, see the troubleshooting section.
Secondary IP address range limits
You can create 30 secondary ranges in a given subnet. For each cluster, you need two secondary ranges: one for Pods and one for Services. As a result, a single subnet can support at most 15 clusters that each use their own pair of ranges.

Note: The primary range and the Pod secondary range can be shared between clusters, but this is not a recommended configuration.
Create a cluster in your first service project

To create a cluster in your first service project, perform the following steps using the gcloud CLI or the Google Cloud console.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the project picker, select your first service project.
Click Create.

In the Autopilot or Standard section, click Configure.

For Name, enter tier-1-cluster.

In the Region drop-down list, select the same region that you used for the subnets.

From the navigation pane, click Networking.

Under Cluster networking, click Networks shared with me.

In the Network field, select shared-net.

For Node subnet, select tier-1.

Under Advanced networking options, for Pod secondary CIDR range, select tier-1-pods.

For Services secondary CIDR range, select tier-1-services.

Click Create.
If you created a Standard cluster, you can see that the nodes of the cluster are in the primary address range of the tier-1 subnet by doing the following:

- When the creation is complete, the Cluster details page is displayed.
- Click the Nodes tab.
- Under Node Pools, click default-pool.
- Under Instance groups, click the name of the instance group you want to inspect. For example, gke-tier-1-cluster-default-pool-5c5add1f-grp.
- In the list of instances, verify that the internal IP addresses of your nodes are in the primary range of the tier-1 subnet: 10.0.4.0/22.
gcloud
Create a cluster named tier-1-cluster in your first service project:

```
gcloud container clusters create-auto tier-1-cluster \
    --project=SERVICE_PROJECT_1_ID \
    --location=CONTROL_PLANE_LOCATION \
    --network=projects/HOST_PROJECT_ID/global/networks/shared-net \
    --subnetwork=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/tier-1 \
    --cluster-secondary-range-name=tier-1-pods \
    --services-secondary-range-name=tier-1-services
```

Replace CONTROL_PLANE_LOCATION with the Compute Engine region of the control plane of your cluster.

When the creation is complete, verify that your cluster nodes are in the primary range of the tier-1 subnet: 10.0.4.0/22.

```
gcloud compute instances list --project SERVICE_PROJECT_1_ID
```

The output shows the internal IP addresses of the nodes:

```
NAME                    ZONE       ... INTERNAL_IP
gke-tier-1-cluster-...  ZONE_NAME  ... 10.0.4.2
gke-tier-1-cluster-...  ZONE_NAME  ... 10.0.4.3
gke-tier-1-cluster-...  ZONE_NAME  ... 10.0.4.4
```
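You can also verify the cluster from the Kubernetes side. A sketch that fetches credentials and lists the node addresses, assuming kubectl is installed:

```
gcloud container clusters get-credentials tier-1-cluster \
    --project SERVICE_PROJECT_1_ID \
    --location CONTROL_PLANE_LOCATION

# The INTERNAL-IP column should show addresses in 10.0.4.0/22.
kubectl get nodes -o wide
```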
Create a cluster in your second service project

To create a cluster in your second service project, perform the following steps using the gcloud CLI or the Google Cloud console.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.

In the project picker, select your second service project.

Click Create.

In the Standard or Autopilot section, click Configure.

For Name, enter tier-2-cluster.

In the Region drop-down list, select the same region that you used for the subnets.

From the navigation pane, click Networking.

Under Cluster networking, click Networks shared with me.

In the Network field, select shared-net.

For Node subnet, select tier-2.

Under Advanced networking options, for Pod secondary CIDR range, select tier-2-pods.

For Services secondary CIDR range, select tier-2-services.

Click Create.
If you created a Standard cluster, you can see that the nodes of the cluster are in the primary address range of the tier-2 subnet by doing the following:

- When the creation is complete, the Cluster details page is displayed.
- Click the Nodes tab.
- Under Node Pools, click default-pool.
- Under Instance groups, click the name of the instance group you want to inspect. For example, gke-tier-2-cluster-default-pool-5c5add1f-grp.
- In the list of instances, verify that the internal IP addresses of your nodes are in the primary range of the tier-2 subnet: 172.16.4.0/22.
gcloud
Create a cluster named tier-2-cluster in your second service project:

```
gcloud container clusters create-auto tier-2-cluster \
    --project=SERVICE_PROJECT_2_ID \
    --location=CONTROL_PLANE_LOCATION \
    --network=projects/HOST_PROJECT_ID/global/networks/shared-net \
    --subnetwork=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/tier-2 \
    --cluster-secondary-range-name=tier-2-pods \
    --services-secondary-range-name=tier-2-services
```

When the creation is complete, verify that your cluster nodes are in the primary range of the tier-2 subnet: 172.16.4.0/22.

```
gcloud compute instances list --project SERVICE_PROJECT_2_ID
```

The output shows the internal IP addresses of the nodes:

```
NAME                    ZONE       ... INTERNAL_IP
gke-tier-2-cluster-...  ZONE_NAME  ... 172.16.4.2
gke-tier-2-cluster-...  ZONE_NAME  ... 172.16.4.3
gke-tier-2-cluster-...  ZONE_NAME  ... 172.16.4.4
```

Note: GKE tracks secondary IP address range usage in project metadata (google_compute_project_metadata). For details and guidance on managing this metadata with Infrastructure as Code (IaC) tools, see Project metadata for secondary IP address ranges.

Create firewall rules
To allow traffic into the network and between the clusters within the network, you need to create firewall rules. The following sections demonstrate how to create and update firewall rules:

- Creating a firewall rule to enable SSH connection to a node: demonstrates how to create a firewall rule that enables traffic from outside of the clusters using SSH.
- Updating the firewall rule to ping between nodes: demonstrates how to update the firewall rule to permit ICMP traffic between the clusters.

SSH and ICMP are used as examples; you must create firewall rules that enable your specific application's networking requirements.
Create a firewall rule to enable SSH connection to a node
In your host project, create a firewall rule for the shared-net network. Allow traffic to enter on TCP port 22, which permits you to connect to your cluster nodes using SSH.
Console
Go to the Firewall page in the Google Cloud console.

In the project picker, select your host project.

From the VPC Networking menu, click Create Firewall Rule.

For Name, enter my-shared-net-rule.

For Network, select shared-net.

For Direction of traffic, select Ingress.

For Action on match, select Allow.

For Targets, select All instances in the network.

For Source filter, select IP ranges.

For Source IP ranges, enter 0.0.0.0/0.

For Protocols and ports, select Specified protocols and ports. In the box, enter tcp:22.

Click Create.
gcloud
Create a firewall rule for your shared network:

```
gcloud compute firewall-rules create my-shared-net-rule \
    --project HOST_PROJECT_ID \
    --network shared-net \
    --direction INGRESS \
    --allow tcp:22
```

Connect to a node by using SSH

After creating the firewall rule that allows ingress traffic on TCP port 22, connect to the node using SSH.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.

In the project picker, select your first service project.

Click tier-1-cluster.

On the Cluster details page, click the Nodes tab.

Under Node Pools, click the name of your node pool.

Under Instance groups, click the name of your instance group. For example, gke-tier-1-cluster-default-pool-faf87d48-grp.

In the list of instances, make a note of the internal IP addresses of the nodes. These addresses are in the 10.0.4.0/22 range.

For one of your nodes, click SSH. This succeeds because SSH uses TCP port 22, which is allowed by your firewall rule.
gcloud
List the nodes in your first service project:

```
gcloud compute instances list --project SERVICE_PROJECT_1_ID
```

The output includes the names of the nodes in your cluster:

```
NAME                                            ...
gke-tier-1-cluster-default-pool-faf87d48-3mf8   ...
gke-tier-1-cluster-default-pool-faf87d48-q17k   ...
gke-tier-1-cluster-default-pool-faf87d48-x9rk   ...
```

Connect to one of your nodes using SSH:

```
gcloud compute ssh NODE_NAME \
    --project SERVICE_PROJECT_1_ID \
    --zone COMPUTE_ZONE
```

Replace the following:

- NODE_NAME: the name of one of your nodes.
- COMPUTE_ZONE: the name of a Compute Engine zone within the region.
Update the firewall rule to allow traffic between nodes
In your SSH command-line window, start the CoreOS Toolbox:

```
/usr/bin/toolbox
```

In the toolbox shell, ping one of your other nodes in the same cluster. For example:

```
ping 10.0.4.4
```

The ping command succeeds, because your node and the other node are both in the 10.0.4.0/22 range.

Now, try to ping one of the nodes in the cluster in your other service project. For example:

```
ping 172.16.4.3
```

This time the ping command fails, because your firewall rule does not allow Internet Control Message Protocol (ICMP) traffic.

At an ordinary command prompt, not your toolbox shell, update your firewall rule to allow ICMP:

```
gcloud compute firewall-rules update my-shared-net-rule \
    --project HOST_PROJECT_ID \
    --allow tcp:22,icmp
```

In your toolbox shell, ping the node again. For example:

```
ping 172.16.4.3
```

This time the ping command succeeds.
Create additional firewall rules
You can create additional firewall rules to allow communication between nodes, Pods, and Services in your clusters.

For example, the following rule allows traffic to enter from any node, Pod, or Service in tier-1-cluster on any TCP or UDP port:

```
gcloud compute firewall-rules create my-shared-net-rule-2 \
    --project HOST_PROJECT_ID \
    --network shared-net \
    --allow tcp,udp \
    --direction INGRESS \
    --source-ranges 10.0.4.0/22,10.4.0.0/14,10.0.32.0/20
```

The following rule allows traffic to enter from any node, Pod, or Service in tier-2-cluster on any TCP or UDP port:

```
gcloud compute firewall-rules create my-shared-net-rule-3 \
    --project HOST_PROJECT_ID \
    --network shared-net \
    --allow tcp,udp \
    --direction INGRESS \
    --source-ranges 172.16.4.0/22,172.20.0.0/14,172.16.16.0/20
```

Kubernetes will also try to create and manage firewall resources when necessary, for example when you create a load balancer Service. If Kubernetes is unable to change the firewall rules due to a permission issue, a Kubernetes Event is raised to guide you on how to make the changes.

If you want to grant Kubernetes permission to change the firewall rules, see Manage firewall resources.

For Ingress load balancers, if Kubernetes can't change the firewall rules due to insufficient permissions, a firewallXPNError event is emitted every several minutes. In GLBC 1.4 and later, you can mute the firewallXPNError event by adding the networking.gke.io/suppress-firewall-xpn-error: "true" annotation to the Ingress resource. You can always remove this annotation to unmute.
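For example, a minimal sketch with kubectl, assuming a hypothetical Ingress named my-ingress:

```
kubectl annotate ingress my-ingress \
    networking.gke.io/suppress-firewall-xpn-error=true
```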
Create a cluster based on VPC Network Peering in a Shared VPC
You can use Shared VPC with clusters based on VPC Network Peering.
This requires that you grant the following permissions on the host project, either to the user account or to the service account used to create the cluster (a sketch follows this list):

- compute.networks.get
- compute.networks.updatePeering
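A minimal sketch that grants these two permissions through a hypothetical custom role named gkePeeringUser bound to a user account (an existing role that contains both permissions would work equally well):

```
# Create a narrow custom role with just the two required permissions.
gcloud iam roles create gkePeeringUser \
    --project=HOST_PROJECT_ID \
    --title="GKE Peering User" \
    --permissions=compute.networks.get,compute.networks.updatePeering

# Bind it to the account that will create the cluster.
gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member "user:USER_EMAIL" \
    --role projects/HOST_PROJECT_ID/roles/gkePeeringUser
```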
You must also ensure that the control plane IP address range does not overlap with other reserved ranges in the shared network.
In this section, you create a VPC-native cluster named cluster-vpc in a predefined Shared VPC network.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.

Click Create.

In the Autopilot or Standard section, click Configure.

For Name, enter cluster-vpc.

From the navigation pane, click Networking.

In the Cluster networking section, select the Enable Private nodes checkbox.

(Optional for Autopilot) Set Control plane IP range to 172.16.0.16/28.

In the Network drop-down list, select the VPC network you created previously.

In the Node subnet drop-down list, select the shared subnet you created previously.
Configure your cluster as needed.
ClickCreate.
gcloud
Run the following command to create a cluster named cluster-vpc in a predefined Shared VPC:

```
gcloud container clusters create-auto private-cluster-vpc \
    --project=PROJECT_ID \
    --location=CONTROL_PLANE_LOCATION \
    --network=projects/HOST_PROJECT/global/networks/shared-net \
    --subnetwork=SHARED_SUBNETWORK \
    --cluster-secondary-range-name=tier-1-pods \
    --services-secondary-range-name=tier-1-services \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.0/28
```

Reserve IP addresses
You can reserve internal and external IP addresses for your Shared VPC clusters. Ensure that the IP addresses are reserved in the service project.

For internal IP addresses, you must provide the subnetwork where the IP address belongs. To reserve an IP address across projects, use the full resource URL to identify the subnetwork.

You can use the following command in the Google Cloud CLI to reserve an internal IP address:

```
gcloud compute addresses create RESERVED_IP_NAME \
    --region=COMPUTE_REGION \
    --subnet=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/SUBNETWORK_NAME \
    --addresses=IP_ADDRESS \
    --project=SERVICE_PROJECT_ID
```

To call this command, you must have the compute.subnetworks.use permission on the subnetwork. You can either grant the caller the compute.networkUser role on the subnetwork, or you can grant the caller a custom role with the compute.subnetworks.use permission at the project level.
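A sketch of the subnet-level grant, assuming a hypothetical user USER_EMAIL:

```
gcloud compute networks subnets add-iam-policy-binding SUBNETWORK_NAME \
    --project HOST_PROJECT_ID \
    --region COMPUTE_REGION \
    --member "user:USER_EMAIL" \
    --role roles/compute.networkUser
```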
Clean up
After completing the exercises in this guide, perform the following tasks to remove the resources and prevent your account from incurring unwanted charges:
Delete the clusters
Delete the two clusters you created.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.

In the project picker, select your first service project.

Select tier-1-cluster, and click Delete.

In the project picker, select your second service project.

Select tier-2-cluster, and click Delete.
gcloud
```
gcloud container clusters delete tier-1-cluster \
    --project SERVICE_PROJECT_1_ID \
    --location CONTROL_PLANE_LOCATION

gcloud container clusters delete tier-2-cluster \
    --project SERVICE_PROJECT_2_ID \
    --location CONTROL_PLANE_LOCATION
```

Disable Shared VPC
Disable Shared VPC in your host project.
Console
Go to the Shared VPC page in the Google Cloud console.

In the project picker, select your host project.

Click Disable Shared VPC.

Enter the HOST_PROJECT_ID in the field, and click Disable.
gcloud
```
gcloud compute shared-vpc associated-projects remove SERVICE_PROJECT_1_ID \
    --host-project HOST_PROJECT_ID

gcloud compute shared-vpc associated-projects remove SERVICE_PROJECT_2_ID \
    --host-project HOST_PROJECT_ID

gcloud compute shared-vpc disable HOST_PROJECT_ID
```

Delete your firewall rules
Remove the firewall rules you created.
Console
Go to the Firewall page in the Google Cloud console.

In the project picker, select your host project.

In the list of rules, select my-shared-net-rule, my-shared-net-rule-2, and my-shared-net-rule-3.

Click Delete.
gcloud
Delete your firewall rules:
```
gcloud compute firewall-rules delete \
    my-shared-net-rule \
    my-shared-net-rule-2 \
    my-shared-net-rule-3 \
    --project HOST_PROJECT_ID
```

Delete the shared network
Delete the shared network you created.
Console
Go to the VPC networks page in the Google Cloud console.

In the project picker, select your host project.

In the list of networks, click the shared-net link.

Click Delete VPC Network.
gcloud
```
gcloud compute networks subnets delete tier-1 \
    --project HOST_PROJECT_ID \
    --region COMPUTE_REGION

gcloud compute networks subnets delete tier-2 \
    --project HOST_PROJECT_ID \
    --region COMPUTE_REGION

gcloud compute networks delete shared-net --project HOST_PROJECT_ID
```

Remove the Host Service Agent User role
Remove the Host Service Agent User roles from your two service projects.
Console
Go to the IAM page in the Google Cloud console.

In the project picker, select your host project.

Select Include Google-provided role grants.

In the list of members, select the row that shows service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com is granted the Kubernetes Engine Host Service Agent User role.

Click Edit principal.

Under Kubernetes Engine Host Service Agent User, click the delete icon to remove the role.

Click Save.

Select the row that shows service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com is granted the Kubernetes Engine Host Service Agent User role.

Click Edit principal.

Under Kubernetes Engine Host Service Agent User, click the delete icon to remove the role.

Click Save.
gcloud
Remove the Host Service Agent User role from the GKE service account of your first service project:

```
gcloud projects remove-iam-policy-binding HOST_PROJECT_ID \
    --member serviceAccount:service-SERVICE_PROJECT_1_NUM@container-engine-robot.iam.gserviceaccount.com \
    --role roles/container.hostServiceAgentUser
```

Remove the Host Service Agent User role from the GKE service account of your second service project:

```
gcloud projects remove-iam-policy-binding HOST_PROJECT_ID \
    --member serviceAccount:service-SERVICE_PROJECT_2_NUM@container-engine-robot.iam.gserviceaccount.com \
    --role roles/container.hostServiceAgentUser
```
Troubleshooting
The following sections help you to resolve common issues with Shared VPC clusters.
Error: Failed to get metadata from network project
The following message is a common error when working with Shared VPC clusters:

```
Failed to get metadata from network project. GCE_PERMISSION_DENIED:
Google Compute Engine: Required 'compute.projects.get' permission for
'projects/HOST_PROJECT_ID'
```

This error can occur for the following reasons:
- The GKE API has not been enabled in the host project.
- The host project's GKE service account does not exist. For example, it might have been deleted.
- The host project's GKE service account does not have the Kubernetes Engine Service Agent (container.serviceAgent) role in the host project. The binding might have been accidentally removed.
- The service project's GKE service account does not have the Host Service Agent User role in the host project.
To resolve the issue, determine whether the host project's GKE service account exists.

If the service account doesn't exist, do the following:

- If the GKE API is not enabled in the host project, enable it. This creates the host project's GKE service account and grants it the Kubernetes Engine Service Agent (container.serviceAgent) role in the host project.
- If the GKE API is enabled in the host project, then either the host project's GKE service account has been deleted or it doesn't have the Kubernetes Engine Service Agent (container.serviceAgent) role in the host project. To restore the GKE service account or the role binding, you must disable and then re-enable the GKE API. For more information, see Error 400/403: Missing edit permissions on account.

Note: Disabling and then re-enabling the GKE API in the host project won't impact the operation of existing clusters in service projects. However, you cannot create new clusters that use a Shared VPC network, in a service project or in the host project, while the GKE API is disabled.
Issue: Connectivity
If you're experiencing connectivity issues between Compute Engine VMs that are in the same Virtual Private Cloud (VPC) network or two VPC networks connected with VPC Network Peering, refer to Troubleshooting connectivity between virtual machine (VM) instances with internal IP addresses in the Virtual Private Cloud (VPC) documentation.
Issue: Packet loss
If you're experiencing issues with packet loss when sending traffic from a cluster to an external IP address using Cloud NAT, VPC-native clusters, or the IP masquerade agent, see Troubleshoot Cloud NAT packet loss from a cluster.
What's next
- Read the Shared VPC overview.
- Learn about provisioning a Shared VPC.
- Read the GKE network overview.
- Read about automatically created firewall rules.
- Learn how to troubleshoot connectivity between virtual machine (VM) instances with internal IP addresses.